YOLOv8

Helpers

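A minimal sketch of the iterative prune / validate / fine-tune loop that the prune helper drives, assuming the Ultralytics YOLO API and the torch-pruning library. The uniform per-step ratio stands in for the one-cycle Schedule, and YOLO-specific surgery (e.g. patching the trainer so the pruned, in-memory model is the one fine-tuned) is glossed over.

import torch
import torch_pruning as tp
from ultralytics import YOLO

def prune(args):
    # Sketch only -- an illustrative stand-in for the actual helper.
    yolo = YOLO(args.model)                        # e.g. 'yolov8l.pt'
    example_inputs = torch.randn(1, 3, 640, 640)

    base_macs, _ = tp.utils.count_ops_and_params(yolo.model, example_inputs)
    base_map = yolo.val(data='coco128.yaml', name='baseline_val').box.map  # mAP50-95

    # Uniform per-step ratio; the real helper takes it from args.sched instead.
    step_ratio = 1 - (1 - args.target_prune_rate) ** (1 / args.iterative_steps)

    for step in range(args.iterative_steps):
        pruner = tp.pruner.GroupNormPruner(
            yolo.model, example_inputs,
            importance=tp.importance.GroupNormImportance(),
            pruning_ratio=step_ratio,
            ignored_layers=[yolo.model.model[-1]],  # keep the Detect head intact
        )
        pruner.step()                               # structurally remove channels

        macs, params = tp.utils.count_ops_and_params(yolo.model, example_inputs)
        pruned_map = yolo.val(data='coco128.yaml', name=f'step_{step}_pre_val').box.map
        print(f'After pruning iter {step + 1}: MACs={macs / 1e9} G, '
              f'#Params={params / 1e6} M, mAP={pruned_map}, speed up={base_macs / macs}')

        # Fine-tune to recover accuracy, then stop early if the drop is too large.
        yolo.train(data='coco128.yaml', epochs=10, name=f'step_{step}_finetune')
        tuned_map = yolo.val(data='coco128.yaml', name=f'step_{step}_post_val').box.map
        if base_map - tuned_map > args.max_map_drop:
            break

Only the loop structure matters here; the MACs, parameter counts and mAP values printed in the Training log below come from the real helpers.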
Training

import argparse
from functools import partial

# Schedule, sched_onecycle and prune are provided by the Helpers section above.
class Args(argparse.Namespace):
  model = 'yolov8l.pt'
  cfg = 'default.yaml'
  iterative_steps = 15
  target_prune_rate = 0.15
  max_map_drop = 0.2
  sched = Schedule(partial(sched_onecycle, α=10, β=4))

args = Args()
prune(args)
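Running prune(args) produces the log below: an initial validation of the pretrained weights, a 10-epoch warm-up fine-tune on coco128, a fresh baseline validation, and then one prune / validate / 10-epoch fine-tune / validate round per iteration, stopping early if the mAP50-95 drop from the baseline exceeds max_map_drop. The "speed up" reported after each iteration is consistent with the ratio of baseline to pruned MACs, e.g. for the first iteration:

# MACs taken from the log below (in GMACs).
base_macs, pruned_macs = 82.72641, 81.5020432
print(base_macs / pruned_macs)  # ~1.01502, matching the reported speed up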
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 43,668,288 parameters, 0 gradients, 165.2 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3846.6±2037.7 MB/s, size: 53.3 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.731      0.768      0.828       0.66
Speed: 0.7ms preprocess, 3.3ms inference, 0.0ms loss, 2.2ms postprocess per image
Results saved to runs/detect/val7
Before Pruning: MACs= 82.72641 G, #Params= 43.69152 M, mAP= 0.66035
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=train7, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/train7, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3560.8±1144.6 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1679.0±396.4 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/train7/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/train7
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      17.6G     0.8369     0.7191      1.072        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.774      0.763      0.839      0.674

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      17.1G     0.8351      0.665      1.061        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.826      0.783       0.85      0.689

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      17.2G     0.8322     0.6222      1.066        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.858      0.794       0.86      0.704

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      17.2G     0.8023     0.5615      1.029         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.896      0.793       0.87      0.717

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      17.1G     0.7755      0.521      1.012         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.879      0.824       0.89      0.731

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      17.1G     0.7552     0.5039      1.011        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.869       0.84      0.892      0.738

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      17.1G     0.7342     0.4821     0.9817         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.885      0.835      0.896      0.749

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      17.2G     0.7389     0.4766     0.9989        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.884      0.855      0.904      0.762

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      17.2G     0.7197     0.4778     0.9785        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.875      0.866      0.909      0.767

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      17.2G     0.7149      0.457      1.007        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.882      0.867      0.911      0.768

10 epochs completed in 0.009 hours.
Optimizer stripped from runs/detect/train7/weights/last.pt, 175.3MB
Optimizer stripped from runs/detect/train7/weights/best.pt, 175.3MB

Validating runs/detect/train7/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 43,668,288 parameters, 0 gradients, 165.2 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.883      0.867      0.911      0.768
Speed: 0.1ms preprocess, 2.7ms inference, 0.0ms loss, 0.3ms postprocess per image
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 43,668,288 parameters, 0 gradients, 165.2 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5500.5±982.8 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.901      0.849      0.904      0.769
Speed: 0.1ms preprocess, 5.4ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/baseline_val
Before Pruning: MACs= 82.72641 G, #Params= 43.69152 M, mAP= 0.76904
Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.0027046189978777607
After Pruning
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 43,081,939 parameters, 74,176 gradients, 162.7 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5675.7±1427.5 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.878      0.863      0.903      0.748
Speed: 0.1ms preprocess, 6.8ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_0_pre_val
After post-pruning Validation
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 1: MACs=81.5020432 G, #Params=43.105009 M, mAP=0.7480799419444839, speed up=1.0150224847369225
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_0_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_0_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4199.8±1440.5 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1993.0±415.7 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_0_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_0_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      17.3G     0.6682     0.4222     0.9629        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.901      0.849      0.908      0.756

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      17.3G     0.6351     0.3917     0.9467        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.907      0.847      0.915      0.757

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      17.5G     0.6704     0.4248     0.9809        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.904      0.854      0.918      0.762

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      17.4G     0.6577     0.3918      0.955         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.901      0.857      0.919      0.768

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      17.6G     0.6374     0.3958     0.9421         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.892      0.868      0.917      0.775

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      17.6G     0.6424     0.4056     0.9488        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.903      0.867      0.917      0.776

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      17.4G      0.628     0.3976     0.9314         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.923      0.856      0.921      0.783

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      17.5G     0.6647     0.3993      0.963        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.921      0.867      0.926       0.79

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      17.4G     0.6561     0.4047     0.9421        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.907      0.881      0.929      0.793

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      17.5G     0.6618      0.416     0.9685        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.913       0.88      0.931      0.794

10 epochs completed in 0.009 hours.
Optimizer stripped from runs/detect/step_0_finetune/weights/last.pt, 173.0MB
Optimizer stripped from runs/detect/step_0_finetune/weights/best.pt, 173.0MB

Validating runs/detect/step_0_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 43,081,939 parameters, 0 gradients, 162.7 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.913       0.88      0.931      0.794
Speed: 0.1ms preprocess, 3.1ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 43,081,939 parameters, 0 gradients, 162.7 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5127.1±1250.8 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.919      0.875      0.928       0.79
Speed: 0.1ms preprocess, 7.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_0_post_val
After fine tuning mAP=0.7902131736934158
After post fine-tuning validation
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.005179586515491673
After Pruning
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 42,712,366 parameters, 74,160 gradients, 161.3 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5252.0±1282.8 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.934      0.855      0.926      0.784
Speed: 0.1ms preprocess, 7.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_1_pre_val
After post-pruning Validation
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 2: MACs=80.7933916 G, #Params=42.735334 M, mAP=0.7843893707557463, speed up=1.0239254072854147
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_1_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_1_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3475.7±1351.6 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1459.9±423.3 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_1_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_1_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      17.1G     0.5668     0.3537     0.9157        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.933      0.866       0.93      0.789

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      17.2G     0.5344     0.3429     0.9029        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.92      0.886      0.937      0.797

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      17.2G     0.5649     0.3446     0.9291        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.918      0.885      0.936      0.796

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      17.3G     0.5479     0.3429     0.9087         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.925      0.875      0.938        0.8

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      17.5G     0.5515     0.3491     0.8995         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.926      0.875      0.938      0.799

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      17.5G     0.5535     0.3455     0.9062        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.903      0.879      0.936      0.799

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      17.3G     0.5605      0.353     0.8941         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.91      0.881       0.94      0.804

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      17.4G     0.6074     0.3693     0.9276        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.921       0.89      0.944      0.814

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      17.4G     0.5933     0.3803     0.9049        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.927      0.895      0.945      0.814

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      17.6G     0.6217     0.3959     0.9434        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.927      0.896      0.946      0.817

10 epochs completed in 0.009 hours.
Optimizer stripped from runs/detect/step_1_finetune/weights/last.pt, 171.5MB
Optimizer stripped from runs/detect/step_1_finetune/weights/best.pt, 171.5MB

Validating runs/detect/step_1_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 42,712,366 parameters, 0 gradients, 161.3 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.927      0.896      0.946      0.817
Speed: 0.1ms preprocess, 3.1ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 42,712,366 parameters, 0 gradients, 161.3 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5559.9±1213.1 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.925      0.887      0.939      0.807
Speed: 0.2ms preprocess, 6.9ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_1_post_val
After fine tuning mAP=0.807224186903875
After post fine-tuning validation
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.009769531739708686
After Pruning
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 42,094,706 parameters, 74,160 gradients, 158.8 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4357.7±655.2 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.934      0.879      0.942      0.802
Speed: 0.1ms preprocess, 7.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_2_pre_val
After post-pruning Validation
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 3: MACs=79.5541908 G, #Params=42.117503 M, mAP=0.8017145052594012, speed up=1.0398749024796818
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_2_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_2_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3830.3±1473.9 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1639.3±480.6 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_2_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_2_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      16.9G     0.5199     0.3244     0.8907        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.927      0.881      0.944      0.812

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      17.2G     0.5038     0.3259     0.8853        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.922      0.887      0.941       0.81

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      17.2G     0.5075     0.3171     0.9042        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.914      0.895      0.948      0.813

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      17.2G     0.5008     0.3164     0.8908         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.92      0.887      0.944      0.812

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      17.2G     0.4901     0.3191     0.8742         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.934       0.88      0.945      0.814

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      17.3G     0.4969     0.3177     0.8799        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.936      0.887      0.947      0.818

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      17.1G     0.5126     0.3256     0.8695         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.912      0.904       0.95       0.82

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      17.3G     0.5631     0.3562     0.9061        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.918      0.904      0.953      0.821

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      17.3G     0.5603     0.3584     0.8904        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.924      0.898      0.952      0.823

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      17.6G     0.6014     0.3852     0.9412        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.929      0.897      0.952      0.826

10 epochs completed in 0.009 hours.
Optimizer stripped from runs/detect/step_2_finetune/weights/last.pt, 169.0MB
Optimizer stripped from runs/detect/step_2_finetune/weights/best.pt, 169.0MB

Validating runs/detect/step_2_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 42,094,706 parameters, 0 gradients, 158.8 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.929      0.897      0.952      0.826
Speed: 0.1ms preprocess, 3.1ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 42,094,706 parameters, 0 gradients, 158.8 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1953.9±892.9 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.926      0.893       0.95       0.82
Speed: 0.2ms preprocess, 7.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_2_post_val
After fine tuning mAP=0.8196362847789926
After post fine-tuning validation
Model Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.017924759478681728
After Pruning
Model Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 40,919,781 parameters, 74,160 gradients, 154.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5161.9±848.4 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.914       0.87      0.936      0.783
Speed: 0.1ms preprocess, 6.9ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_3_pre_val
After post-pruning Validation
Model Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 4: MACs=77.3600192 G, #Params=40.942254 M, mAP=0.782782444051276, speed up=1.0693690003634333
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_3_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_3_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3738.8±1510.8 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1659.4±471.2 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_3_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_3_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      16.7G      0.533     0.3392     0.8902        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.929      0.865      0.938      0.799

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      17.4G     0.4804       0.31      0.871        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.926      0.891      0.943      0.815

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      17.4G     0.4873     0.3176     0.8843        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.925      0.891      0.942      0.817

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10        17G     0.4908     0.3098     0.8743         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.934      0.886      0.943      0.821

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      16.9G     0.4684     0.3018     0.8614         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.916      0.894      0.944       0.82

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      17.1G     0.4781     0.3192      0.862        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.917      0.891      0.944       0.82

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10        17G     0.5015     0.3257     0.8657         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.927      0.893      0.951      0.826

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10        17G     0.5618     0.3555     0.8989        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.935      0.893      0.953      0.832

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      17.1G     0.5484     0.3455       0.88        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.928      0.895      0.953      0.832

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      17.4G     0.5878     0.3835     0.9243        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.925      0.892      0.952      0.834

10 epochs completed in 0.009 hours.
Optimizer stripped from runs/detect/step_3_finetune/weights/last.pt, 164.3MB
Optimizer stripped from runs/detect/step_3_finetune/weights/best.pt, 164.3MB

Validating runs/detect/step_3_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 40,919,781 parameters, 0 gradients, 154.4 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.925      0.892      0.952      0.834
Speed: 0.1ms preprocess, 3.0ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 40,919,781 parameters, 0 gradients, 154.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4332.0±1294.7 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.923       0.89      0.948      0.833
Speed: 0.2ms preprocess, 7.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_3_post_val
After fine tuning mAP=0.8329326575889358
After post fine-tuning validation
Model Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.03136884242508382
After Pruning
Model Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 60, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 39,455,305 parameters, 74,160 gradients, 149.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4983.8±422.4 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.915      0.877      0.937      0.794
Speed: 0.1ms preprocess, 6.9ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_4_pre_val
After post-pruning Validation
Model Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 60, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 5: MACs=74.8418608 G, #Params=39.477376 M, mAP=0.7937988689018066, speed up=1.1053494062777232
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_4_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_4_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3944.1±1349.5 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 851.4±239.0 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_4_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_4_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      16.4G     0.5412     0.3505     0.8826        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.921      0.884      0.942      0.803

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      16.6G     0.4801      0.311      0.862        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.906      0.894      0.947      0.811

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      16.6G     0.4775     0.3041      0.872        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.913      0.893      0.948      0.816

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      16.7G     0.4767     0.3017     0.8603         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.909      0.894      0.947       0.82

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      16.7G     0.4872     0.3068     0.8659         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.927      0.887      0.947      0.815

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      16.7G     0.4826     0.3129       0.86        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.934      0.878      0.943      0.816

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      16.7G     0.5067     0.3249     0.8598         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.938      0.881      0.945      0.817

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      16.7G     0.5403     0.3384     0.8883        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.941      0.885      0.946       0.82

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      16.8G     0.5609     0.3507     0.8826        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.94      0.888      0.948      0.824

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      16.6G     0.5955     0.3752     0.9273        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.937      0.889      0.948      0.825

10 epochs completed in 0.009 hours.
Optimizer stripped from runs/detect/step_4_finetune/weights/last.pt, 158.5MB
Optimizer stripped from runs/detect/step_4_finetune/weights/best.pt, 158.5MB

Validating runs/detect/step_4_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 39,455,305 parameters, 0 gradients, 149.4 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.937      0.889      0.948      0.825
Speed: 0.1ms preprocess, 3.0ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 60, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 60, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 39,455,305 parameters, 0 gradients, 149.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4391.9±1982.1 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.931      0.892      0.948      0.827
Speed: 0.2ms preprocess, 7.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_4_post_val
After fine tuning mAP=0.8272230343997624
After post fine-tuning validation
Model Conv2d(3, 60, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 60, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.051012679818528694
After Pruning
Model Conv2d(3, 60, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 37,708,749 parameters, 74,160 gradients, 143.2 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4918.3±657.9 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.901       0.86      0.925      0.767
Speed: 0.1ms preprocess, 6.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_5_pre_val
After post-pruning Validation
Model Conv2d(3, 60, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 6: MACs=71.732976 G, #Params=37.730325 M, mAP=0.7673209592104678, speed up=1.1532549046898597
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_5_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_5_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4051.3±1310.0 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1346.5±283.8 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_5_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_5_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      16.1G     0.5751     0.3595     0.8923        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.916      0.875      0.932      0.782

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      16.3G     0.5115     0.3291     0.8669        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.922      0.887      0.939      0.791

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      16.3G     0.4856     0.3229      0.878        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.921      0.882      0.941      0.792

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      16.3G     0.4941     0.3111     0.8656         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.929      0.888      0.947      0.804

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      16.3G     0.4775     0.3146     0.8614         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.931      0.887      0.944      0.805

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      16.4G     0.5039     0.3229     0.8672        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.921      0.897      0.942      0.811

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      16.3G     0.5039     0.3256     0.8601         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.933      0.885      0.941      0.813

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      16.4G      0.552      0.351     0.8934        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.936      0.884      0.946      0.819

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      16.4G     0.5808     0.3612      0.891        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.929      0.895      0.948      0.819

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      16.3G     0.6055     0.3872      0.936        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.926      0.898      0.949      0.822

10 epochs completed in 0.009 hours.
Optimizer stripped from runs/detect/step_5_finetune/weights/last.pt, 151.5MB
Optimizer stripped from runs/detect/step_5_finetune/weights/best.pt, 151.5MB

Validating runs/detect/step_5_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 37,708,749 parameters, 0 gradients, 143.2 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.926      0.898      0.949      0.822
Speed: 0.1ms preprocess, 2.9ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 37,708,749 parameters, 0 gradients, 143.2 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4543.0±2621.0 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.917      0.896      0.945      0.821
Speed: 0.1ms preprocess, 6.1ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_5_post_val
After fine tuning mAP=0.8206992215945592
After post fine-tuning validation
Model Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
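Each iteration in this log follows the same four-stage pattern visible in the run directories: prune to the scheduled ratio, validate (step_N_pre_val), fine-tune for 10 epochs (step_N_finetune), then validate again (step_N_post_val); the directories are named step_N while the summary prints iter N+1, as with step_5_pre_val / "iter 6" above. The outline below is only a hypothetical reconstruction of that loop; prune_one_step, validate and finetune are placeholder names, not the actual helpers that produced this output.

def pruning_iteration(model, step, target_ratio, base_macs):
    # Placeholder helpers (hypothetical): prune_one_step, validate, finetune.
    prune_one_step(model, target_ratio)                        # remove channels up to the cumulative target
    macs, map_pruned = validate(model, name=f"step_{step}_pre_val")
    print(f"After pruning iter {step + 1}: MACs={macs} G, mAP={map_pruned}, "
          f"speed up={base_macs / macs}")
    finetune(model, epochs=10, name=f"step_{step}_finetune")   # 10-epoch recovery fine-tune
    _, map_recovered = validate(model, name=f"step_{step}_post_val")
    print(f"After fine tuning mAP={map_recovered}")
    return map_recovered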
0.07518590641324997
After Pruning
Model Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 57, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 35,995,675 parameters, 74,160 gradients, 136.7 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5429.6±1306.8 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.896      0.825      0.912      0.749
Speed: 0.1ms preprocess, 6.4ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_6_pre_val
After post-pruning Validation
Model Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 57, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 7: MACs=68.4860368 G, #Params=36.016747 M, mAP=0.7488644175882014, speed up=1.207930992438447
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_6_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_6_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3739.6±1602.1 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1726.2±473.0 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_6_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_6_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      15.6G     0.5731     0.3602     0.8969        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.917      0.852      0.929      0.781

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      15.8G     0.5205     0.3361     0.8819        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.902      0.884      0.937      0.798

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      15.9G     0.4968     0.3452     0.8811        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.907      0.892      0.942      0.805

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      15.9G     0.5077     0.3303     0.8692         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929        0.9      0.894       0.94      0.809

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      15.9G     0.5099     0.3369     0.8692         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.916      0.889      0.937      0.802

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      15.9G     0.5154     0.3385     0.8712        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.92      0.893      0.939      0.801

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      15.9G     0.5223     0.3358     0.8692         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.898      0.904      0.939      0.807

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10        16G     0.5637      0.354     0.8967        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.906      0.898      0.938      0.809

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10        16G     0.5919     0.3694      0.901        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.914      0.897      0.939      0.813

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      15.8G     0.6332     0.4071      0.943        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.917      0.893       0.94      0.813

10 epochs completed in 0.008 hours.
Optimizer stripped from runs/detect/step_6_finetune/weights/last.pt, 144.6MB
Optimizer stripped from runs/detect/step_6_finetune/weights/best.pt, 144.6MB

Validating runs/detect/step_6_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 35,995,675 parameters, 0 gradients, 136.7 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.917      0.893       0.94      0.813
Speed: 0.1ms preprocess, 2.8ms inference, 0.0ms loss, 0.4ms postprocess per image
After fine-tuning
Model Conv2d(3, 57, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 57, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 35,995,675 parameters, 0 gradients, 136.7 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5162.0±828.0 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.922      0.887      0.946      0.815
Speed: 0.1ms preprocess, 6.5ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_6_post_val
After fine tuning mAP=0.8150353523112258
After post fine-tuning validation
Model Conv2d(3, 57, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 57, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.09935913300797124
After Pruning
Model Conv2d(3, 57, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 34,583,399 parameters, 74,160 gradients, 131.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5541.5±1410.7 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.843      0.884       0.92      0.766
Speed: 0.2ms preprocess, 6.2ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_7_pre_val
After post-pruning Validation
Model Conv2d(3, 57, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 8: MACs=65.8289424 G, #Params=34.604045 M, mAP=0.7662475456515344, speed up=1.2566874597092115
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_7_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_7_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3985.9±1454.0 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 812.7±173.0 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_7_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_7_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      15.4G     0.5617     0.3587     0.8919        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.883      0.867      0.924      0.781

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      15.5G        0.5     0.3217     0.8684        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.922      0.863      0.927      0.791

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      15.5G     0.4909     0.3294      0.884        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.924      0.869       0.93      0.793

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      15.6G     0.4929     0.3229     0.8705         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.882      0.903      0.934      0.795

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      15.6G     0.4975     0.3312     0.8646         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.889      0.895      0.935      0.801

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      15.6G     0.5017     0.3367     0.8697        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.913      0.889      0.935      0.797

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      15.6G     0.5345     0.3396     0.8669         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.923      0.881      0.936        0.8

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      15.5G     0.5751     0.3614     0.8977        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.924      0.882      0.937        0.8

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      15.5G     0.5991     0.3909     0.8971        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.916      0.889       0.94      0.808

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      15.5G     0.6296      0.407      0.937        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.921      0.892      0.941       0.81

10 epochs completed in 0.008 hours.
Optimizer stripped from runs/detect/step_7_finetune/weights/last.pt, 139.0MB
Optimizer stripped from runs/detect/step_7_finetune/weights/best.pt, 139.0MB

Validating runs/detect/step_7_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 34,583,399 parameters, 0 gradients, 131.4 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.921      0.891      0.941       0.81
Speed: 0.1ms preprocess, 2.7ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 34,583,399 parameters, 0 gradients, 131.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5065.6±1461.7 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.913      0.894      0.943      0.813
Speed: 0.1ms preprocess, 6.1ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_7_post_val
After fine tuning mAP=0.8132784771857411
After post fine-tuning validation
Model Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.11900297040141611
After Pruning
Model Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 33,747,610 parameters, 74,160 gradients, 128.5 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5359.4±1196.1 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.931      0.859      0.932      0.786
Speed: 0.1ms preprocess, 5.9ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_8_pre_val
After post-pruning Validation
Model Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 9: MACs=64.3900056 G, #Params=33.768007 M, mAP=0.7864229772631458, speed up=1.2847709148203583
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_8_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_8_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4027.1±1719.2 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1657.3±416.8 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_8_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_8_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      15.3G     0.5136     0.3353     0.8737        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.914       0.88      0.938      0.803

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      15.3G     0.4621     0.2981     0.8555        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.888        0.9      0.941      0.809

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      15.2G     0.4527     0.3111     0.8674        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.957      0.858      0.939      0.808

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      15.2G     0.4709      0.312     0.8606         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.901      0.898      0.941      0.811

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      15.4G     0.4727     0.3065     0.8574         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.893        0.9      0.942      0.808

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      15.4G     0.4873     0.3299     0.8622        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.927      0.884      0.941      0.807

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      15.3G     0.5022     0.3266     0.8596         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.931       0.88      0.943      0.804

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      15.3G     0.5583     0.3447     0.8908        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.926      0.888      0.945      0.808

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      15.3G       0.58     0.3647     0.8901        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.932      0.881      0.942      0.811

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      15.2G     0.6333     0.4065     0.9388        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.926      0.883      0.941      0.811

10 epochs completed in 0.008 hours.
Optimizer stripped from runs/detect/step_8_finetune/weights/last.pt, 135.6MB
Optimizer stripped from runs/detect/step_8_finetune/weights/best.pt, 135.6MB

Validating runs/detect/step_8_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 33,747,610 parameters, 0 gradients, 128.5 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.931       0.88      0.942      0.811
Speed: 0.1ms preprocess, 2.6ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 33,747,610 parameters, 0 gradients, 128.5 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4779.0±951.4 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.92      0.892      0.943      0.806
Speed: 0.1ms preprocess, 5.9ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_8_post_val
After fine tuning mAP=0.8060252598377426
After post fine-tuning validation
Model Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.1324470533478182
After Pruning
Model Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 33,209,910 parameters, 74,160 gradients, 126.7 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5056.1±1859.8 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.917       0.88      0.942      0.785
Speed: 0.1ms preprocess, 5.3ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_9_pre_val
After post-pruning Validation
Model Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 10: MACs=63.4942128 G, #Params=33.230145 M, mAP=0.784538139397141, speed up=1.302896795658832
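Two things are worth reading off this summary line. The Conv2d shown in the Model/Pruner printouts takes 3 input channels, i.e. it is the stem convolution that sees the RGB image, so an iteration like this one can still cut MACs and parameters while that printout stays at 55 output channels: the removed channels came from deeper layers. The per-step savings also shrink along with the schedule, which the short sketch below (values copied from the summary lines above) makes explicit.

# Per-step compression read off the "After pruning iter N" summary lines in this log.
macs   = {6: 71.732976, 7: 68.4860368, 8: 65.8289424, 9: 64.3900056, 10: 63.4942128}
params = {6: 37.730325, 7: 36.016747, 8: 34.604045, 9: 33.768007, 10: 33.230145}

for prev, cur in zip(sorted(params), sorted(params)[1:]):
    step_cut = 1 - params[cur] / params[prev]      # fraction of parameters removed by this step alone
    print(f"iter {cur}: -{step_cut:.1%} params, {macs[cur]:.1f} G MACs")
# iter 7: -4.5% params, 68.5 G MACs
# ...
# iter 10: -1.6% params, 63.5 G MACs   <- stem conv unchanged; only deeper layers pruned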
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_9_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_9_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4710.4±1405.8 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1436.4±536.2 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_9_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_9_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      15.1G     0.5083     0.3199     0.8726        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.923      0.878       0.94      0.795

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      15.2G     0.4453     0.2858     0.8463        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.922      0.877       0.94      0.805

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      15.1G     0.4374     0.2928     0.8633        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.904       0.88      0.942      0.807

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      15.1G     0.4528     0.2921      0.854         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.889      0.904      0.942      0.806

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      15.2G     0.4514     0.2977     0.8483         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.885      0.919      0.951      0.814

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      15.2G     0.4622     0.3078     0.8546        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.877      0.917       0.95      0.814

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      15.1G     0.4877     0.3142     0.8533         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.903      0.909      0.952      0.811

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      15.1G     0.5348     0.3373     0.8856        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.913      0.909      0.951      0.813

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      15.2G     0.5581     0.3507     0.8842        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.918      0.912      0.951      0.816

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      15.1G     0.6125     0.3898     0.9361        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.918      0.911      0.952      0.817

10 epochs completed in 0.008 hours.
Optimizer stripped from runs/detect/step_9_finetune/weights/last.pt, 133.4MB
Optimizer stripped from runs/detect/step_9_finetune/weights/best.pt, 133.4MB

Validating runs/detect/step_9_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 33,209,910 parameters, 0 gradients, 126.7 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.918      0.911      0.952      0.817
Speed: 0.1ms preprocess, 2.5ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 33,209,910 parameters, 0 gradients, 126.7 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 2069.3±576.5 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.922      0.904      0.952      0.817
Speed: 0.1ms preprocess, 5.4ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_9_post_val
After fine tuning mAP=0.816555667499456
After post fine-tuning validation
Model Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.14060228108679124
After Pruning
Model Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,703,049 parameters, 74,160 gradients, 124.6 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5530.2±1394.7 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.941      0.865      0.939      0.795
Speed: 0.1ms preprocess, 6.1ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_10_pre_val
After post-pruning Validation
Model Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 11: MACs=62.4345712 G, #Params=32.723122 M, mAP=0.7950424487460809, speed up=1.3250096030130178
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_10_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_10_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4167.3±1478.2 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 773.9±123.2 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_10_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_10_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      15.1G     0.4828     0.3114     0.8639        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.927      0.877      0.942      0.799

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      15.1G     0.4196     0.2732     0.8407        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.899      0.904      0.943      0.809

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10        15G     0.4306     0.2861     0.8556        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.959      0.861      0.944      0.811

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10        15G     0.4263     0.2811     0.8445         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.952      0.865      0.944      0.814

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10        15G     0.4355     0.2887     0.8438         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.944      0.873      0.946      0.813

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10        15G     0.4476     0.2933     0.8465        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.915        0.9      0.947       0.81

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10        15G     0.4656     0.3029     0.8469         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.92      0.895      0.945      0.812

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      15.1G     0.5244     0.3296     0.8832        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.946      0.882      0.948      0.816

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10        15G     0.5514     0.3476     0.8831        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.943      0.894      0.951      0.818

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10        15G     0.6126      0.378      0.933        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.943      0.895      0.951      0.817

10 epochs completed in 0.008 hours.
Optimizer stripped from runs/detect/step_10_finetune/weights/last.pt, 131.4MB
Optimizer stripped from runs/detect/step_10_finetune/weights/best.pt, 131.4MB

Validating runs/detect/step_10_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,703,049 parameters, 0 gradients, 124.6 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.943      0.894      0.951      0.818
Speed: 0.1ms preprocess, 2.5ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,703,049 parameters, 0 gradients, 124.6 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4396.4±2530.2 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.937      0.896       0.95      0.814
Speed: 0.2ms preprocess, 6.3ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_10_post_val
After fine tuning mAP=0.8142403277090464
After post fine-tuning validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
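Across the iterations in this section, the 10-epoch fine-tune consistently recovers most of the mAP lost at pruning time, and the gap it has to close broadly narrows as the schedule's per-step ratio shrinks. A small sketch tabulating the pairs reported above (post-pruning mAP vs. post-fine-tuning mAP, copied from this log):

# mAP right after pruning vs. after the 10-epoch fine-tune, per iteration.
recovery = {
    6:  (0.7673209592104678, 0.8206992215945592),
    7:  (0.7488644175882014, 0.8150353523112258),
    8:  (0.7662475456515344, 0.8132784771857411),
    9:  (0.7864229772631458, 0.8060252598377426),
    10: (0.784538139397141,  0.816555667499456),
    11: (0.7950424487460809, 0.8142403277090464),
}
for it, (pruned, tuned) in recovery.items():
    print(f"iter {it}: {pruned:.3f} -> {tuned:.3f} (+{tuned - pruned:.3f})")
# e.g. iter 6: 0.767 -> 0.821 (+0.053)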
0.14519222631100823
After Pruning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,669,140 parameters, 74,160 gradients, 124.6 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5125.9±924.0 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.927        0.9      0.949      0.814
Speed: 0.2ms preprocess, 6.1ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_11_pre_val
After post-pruning Validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 12: MACs=62.4070664 G, #Params=32.689204 M, mAP=0.8135646119344967, speed up=1.325593577332454
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_11_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_11_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3971.3±1585.4 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 723.8±135.6 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_11_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_11_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      14.9G      0.396     0.2713     0.8408        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.945      0.893       0.95      0.819

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10        15G     0.3555     0.2442     0.8256        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.94      0.896      0.949      0.819

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      14.9G     0.3572     0.2509     0.8368        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.945      0.895      0.949      0.818

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      14.9G     0.3804     0.2563     0.8346         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.933      0.903      0.949       0.82

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      14.9G     0.3835      0.267     0.8318         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.933      0.899      0.948      0.815

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10        15G     0.4049     0.2751     0.8348        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.946      0.887       0.95      0.816

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10        15G     0.4375     0.2882     0.8372         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.924      0.895      0.949      0.816

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10        15G     0.5142     0.3154     0.8795        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.927      0.898      0.951      0.818

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10        15G     0.5171     0.3301     0.8704        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.934      0.897       0.95       0.82

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      15.1G     0.6157     0.3821     0.9382        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.939        0.9      0.952       0.82

10 epochs completed in 0.008 hours.
Optimizer stripped from runs/detect/step_11_finetune/weights/last.pt, 131.3MB
Optimizer stripped from runs/detect/step_11_finetune/weights/best.pt, 131.3MB

Validating runs/detect/step_11_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,669,140 parameters, 0 gradients, 124.6 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.933      0.903      0.949       0.82
Speed: 0.1ms preprocess, 2.5ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,669,140 parameters, 0 gradients, 124.6 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5264.4±896.4 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.936      0.901       0.95      0.819
Speed: 0.1ms preprocess, 6.1ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_11_post_val
After fine tuning mAP=0.818715380662013
After post fine-tuning validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.14766719382862217
After Pruning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 74,160 gradients, 123.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5427.5±1166.9 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.941      0.871      0.944      0.809
Speed: 0.1ms preprocess, 6.1ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_12_pre_val
After post-pruning Validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 13: MACs=61.8488912 G, #Params=32.436843 M, mAP=0.8090287265949643, speed up=1.3375568226839933
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_12_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_12_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4055.7±1294.9 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 878.0±228.5 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_12_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_12_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10        15G     0.4056     0.2754     0.8413        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.936      0.875      0.945      0.818

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10        15G      0.367     0.2491     0.8286        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.935      0.888      0.948      0.821

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      14.9G     0.3679     0.2586     0.8393        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.943      0.881      0.947      0.826

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      15.1G     0.3758     0.2559     0.8318         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.949      0.883      0.947      0.822

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      15.1G     0.3951     0.2704     0.8339         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.941      0.885      0.947      0.821

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      15.1G     0.4162     0.2774     0.8396        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.941      0.892       0.95      0.818

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      14.9G     0.4546     0.2918     0.8406         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.936      0.901      0.951      0.815

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10        15G     0.4872     0.3136     0.8662        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.94      0.903      0.952      0.814

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10        15G     0.5297     0.3309     0.8719        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.941      0.897      0.947      0.818

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      14.9G      0.608     0.3775     0.9298        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.939      0.895      0.948      0.819

10 epochs completed in 0.008 hours.
Optimizer stripped from runs/detect/step_12_finetune/weights/last.pt, 130.3MB
Optimizer stripped from runs/detect/step_12_finetune/weights/best.pt, 130.3MB

Validating runs/detect/step_12_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.943      0.881      0.947      0.827
Speed: 0.1ms preprocess, 2.5ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4576.3±660.3 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.944      0.882      0.946      0.823
Speed: 0.1ms preprocess, 6.1ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_12_post_val
After fine tuning mAP=0.8229267970026074
After post fine-tuning validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.14897095513156428
After Pruning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 74,160 gradients, 123.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5147.7±1080.5 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.942        0.9      0.949      0.815
Speed: 0.1ms preprocess, 6.1ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_13_pre_val
After post-pruning Validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 14: MACs=61.8488912 G, #Params=32.436843 M, mAP=0.8153406959934129, speed up=1.3375568226839933
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_13_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_13_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4073.6±1476.9 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 909.8±193.4 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_13_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_13_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      14.9G     0.3599     0.2503     0.8299        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.94      0.898      0.948      0.821

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      14.9G     0.3407     0.2283     0.8201        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.937      0.902      0.951      0.826

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      14.8G     0.3487     0.2399     0.8281        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.934      0.906      0.951      0.828

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10        15G     0.3497     0.2389     0.8248         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.938      0.902      0.949      0.827

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10        15G     0.3522     0.2418     0.8247         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.95      0.895      0.953      0.819

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10        15G     0.3821     0.2589     0.8274        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.948      0.894      0.951      0.819

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      14.9G     0.4137     0.2715     0.8289         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.952       0.89      0.949      0.824

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10        15G     0.4671     0.2988     0.8616        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.954      0.895       0.95      0.822

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10        15G     0.4944     0.3161     0.8602        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.949      0.901       0.95      0.822

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      14.8G     0.5943     0.3612     0.9235        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.95      0.901      0.951      0.826

10 epochs completed in 0.008 hours.
Optimizer stripped from runs/detect/step_13_finetune/weights/last.pt, 130.3MB
Optimizer stripped from runs/detect/step_13_finetune/weights/best.pt, 130.3MB

Validating runs/detect/step_13_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.934      0.906      0.951      0.828
Speed: 0.1ms preprocess, 2.5ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4735.7±1747.0 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.931      0.909      0.949      0.823
Speed: 0.1ms preprocess, 6.2ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_13_post_val
After fine tuning mAP=0.82278884829967
After post fine-tuning validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
0.14964931342467439
After Pruning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 74,160 gradients, 123.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4596.6±1966.1 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.945      0.904      0.949      0.819
Speed: 0.2ms preprocess, 6.1ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_14_pre_val
After post-pruning Validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 15: MACs=61.8488912 G, #Params=32.436843 M, mAP=0.8194257834184517, speed up=1.3375568226839933
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_14_finetune, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_14_finetune, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4126.6±1397.7 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco12
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 887.1±278.1 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/
Plotting labels to runs/detect/step_14_finetune/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_14_finetune
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      14.9G      0.332     0.2378     0.8243        121        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.947      0.901      0.951      0.832

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      14.9G     0.3191     0.2212     0.8139        113        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.937      0.913      0.952      0.832

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      14.8G     0.3273     0.2297     0.8235        118        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.937      0.909       0.95      0.829

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      14.9G     0.3607     0.2413     0.8252         68        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.94       0.91       0.95      0.823

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10        15G     0.3739     0.2451     0.8254         95        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.945      0.907      0.949      0.822

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10        15G     0.3695     0.2499     0.8252        122        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.933      0.907      0.944      0.823

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      14.9G     0.4027     0.2608     0.8271         75        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929       0.94      0.895      0.948      0.825

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10        15G     0.4442     0.2861     0.8574        142        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.947      0.896       0.95       0.83

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      14.9G     0.4736     0.2997     0.8551        104        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.946      0.903       0.95       0.83

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      14.8G     0.5683     0.3475     0.9165        164        640: 100%|█████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.946      0.905      0.952      0.829

10 epochs completed in 0.007 hours.
Optimizer stripped from runs/detect/step_14_finetune/weights/last.pt, 130.3MB
Optimizer stripped from runs/detect/step_14_finetune/weights/best.pt, 130.3MB

Validating runs/detect/step_14_finetune/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.947      0.901      0.951      0.832
Speed: 0.1ms preprocess, 2.5ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3499.3±1609.6 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.951      0.897       0.95      0.829
Speed: 0.2ms preprocess, 6.1ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_14_post_val
After fine tuning mAP=0.8288516438034145
After post fine-tuning validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CPU (Intel Core(TM) i9-14900KS)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs

PyTorch: starting from 'runs/detect/step_14_finetune/weights/best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (124.2 MB)

ONNX: starting export with onnx 1.17.0 opset 10...
W0202 14:45:52.878000 36862 site-packages/torch/onnx/_internal/exporter/_compat.py:114] Setting ONNX exporter to use operator set version 18 because the requested opset_version 10 is a lower version than we have implementations for. Automatic version conversion will be performed, which may not be successful at converting to the requested version. If version conversion is unsuccessful, the opset version of the exported model will be kept at 18. Please consider setting opset_version >=18 to leverage latest ONNX features
The model version conversion is not supported by the onnxscript version converter and fallback is enabled. The model will be converted using the onnx C API (target version: 10).
Failed to convert the model to the target version 10 using the ONNX C API. The model was not modified
Traceback (most recent call last):
  File "/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/onnxscript/version_converter/__init__.py", line 127, in call
    converted_proto = _c_api_utils.call_onnx_api(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/onnxscript/version_converter/_c_api_utils.py", line 65, in call_onnx_api
    result = func(proto)
             ^^^^^^^^^^^
  File "/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/onnxscript/version_converter/__init__.py", line 122, in _partial_convert_version
    return onnx.version_converter.convert_version(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/onnx/version_converter.py", line 38, in convert_version
    converted_model_str = C.convert_version(model_str, target_version)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: /github/workspace/onnx/version_converter/BaseConverter.h:70: adapter_lookup: Assertion `false` failed: No Adapter To Version $17 for Resize
Applied 1 of general pattern rewrite rules.
ONNX: slimming with onnxslim 0.1.59...
ONNX: export success ✅ 2.8s, saved as 'runs/detect/step_14_finetune/weights/best.onnx' (123.8 MB)

Export complete (3.3s)
Results saved to /home/nathan/Developer/FasterAI-Labs/gh/fasterai/nbs/tutorials/prune/runs/detect/step_14_finetune/weights
Predict:         yolo predict task=detect model=runs/detect/step_14_finetune/weights/best.onnx imgsz=640  
Validate:        yolo val task=detect model=runs/detect/step_14_finetune/weights/best.onnx imgsz=640 data=/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/ultralytics/cfg/datasets/coco128.yaml  
Visualize:       https://netron.app
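
The opset-10 request above triggers the version-converter warning, and since the Resize op cannot be down-converted, the exported model simply stays at opset 18. If you want to re-export the final checkpoint without the warning, a small sketch (separate from the prune() helper used above) is to ask Ultralytics for a recent opset directly:

from ultralytics import YOLO

# Re-export the pruned checkpoint, requesting a modern opset up front so that
# no version down-conversion (and no Resize adapter lookup) is attempted.
pruned = YOLO('runs/detect/step_14_finetune/weights/best.pt')
pruned.export(format='onnx', opset=18, simplify=True)  # writes best.onnx next to the .pt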

Post-Training Checks

# Reload the final pruned + fine-tuned checkpoint
model = YOLO('runs/detect/step_14_finetune/weights/best.pt')
# Dummy input matching the 640x640 training resolution
example_inputs = torch.randn(1, 3, 640, 640).to(model.device)
# Count MACs and parameters of the pruned network with torch-pruning
base_macs, base_nparams = tp.utils.count_ops_and_params(model.model, example_inputs); base_macs, base_nparams
(61848891200.0, 32436843)
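
The speed-up figure reported in the log is the ratio of baseline MACs to pruned MACs. A quick sanity check, assuming the pre-pruning numbers from the start of the run (82.72641 G MACs, 43.69152 M parameters) as the baseline:

# Hedged sanity check of the compression bookkeeping; the baseline numbers are
# taken from the Before Pruning line earlier in the log.
orig_macs, orig_nparams = 82.72641e9, 43.69152e6

speed_up    = orig_macs / base_macs        # ~1.34x fewer MACs
param_ratio = orig_nparams / base_nparams  # ~1.35x fewer parameters
print(f"speed up: {speed_up:.3f}x, parameter reduction: {param_ratio:.3f}x")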
results = model.val(
                data='coco128.yaml',
                batch=1,
                imgsz=640,
                verbose=False,
            )
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5046.2±1881.0 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95):
                   all        128        929      0.945      0.901      0.951      0.834
Speed: 0.1ms preprocess, 7.3ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/val8
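
Rather than reading the full DetMetrics repr printed below, the headline numbers can be pulled directly from results.box; a minimal sketch using the standard accessors:

# Headline detection metrics from the DetMetrics object returned by model.val()
print(f"mAP50-95:  {results.box.map:.4f}")    # ~0.834 on coco128 here
print(f"mAP50:     {results.box.map50:.4f}")
print(f"precision: {results.box.mp:.4f}")
print(f"recall:    {results.box.mr:.4f}")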
results
ultralytics.utils.metrics.DetMetrics object with attributes:

ap_class_index: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 11, 13, 14, 15, 16, 17, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 67, 68, 69, 71, 72, 73, 74, 75, 76, 77, 79])
box: ultralytics.utils.metrics.Metric object
confusion_matrix: <ultralytics.utils.metrics.ConfusionMatrix object>
curves: ['Precision-Recall(B)', 'F1-Confidence(B)', 'Precision-Confidence(B)', 'Recall-Confidence(B)']
curves_results: [Precision-Recall(B), F1-Confidence(B), Precision-Confidence(B), and Recall-Confidence(B) curve arrays (full numeric dump omitted)]
            0.6967,      0.6977,      0.6987,      0.6997,      0.7007,      0.7017,      0.7027,      0.7037,      0.7047,     0.70571,     0.70671,     0.70771,     0.70871,     0.70971,     0.71071,     0.71171,     0.71271,     0.71371,     0.71471,     0.71572,     0.71672,     0.71772,     0.71872,     0.71972,
           0.72072,     0.72172,     0.72272,     0.72372,     0.72472,     0.72573,     0.72673,     0.72773,     0.72873,     0.72973,     0.73073,     0.73173,     0.73273,     0.73373,     0.73473,     0.73574,     0.73674,     0.73774,     0.73874,     0.73974,     0.74074,     0.74174,     0.74274,     0.74374,
           0.74474,     0.74575,     0.74675,     0.74775,     0.74875,     0.74975,     0.75075,     0.75175,     0.75275,     0.75375,     0.75475,     0.75576,     0.75676,     0.75776,     0.75876,     0.75976,     0.76076,     0.76176,     0.76276,     0.76376,     0.76476,     0.76577,     0.76677,     0.76777,
           0.76877,     0.76977,     0.77077,     0.77177,     0.77277,     0.77377,     0.77477,     0.77578,     0.77678,     0.77778,     0.77878,     0.77978,     0.78078,     0.78178,     0.78278,     0.78378,     0.78478,     0.78579,     0.78679,     0.78779,     0.78879,     0.78979,     0.79079,     0.79179,
           0.79279,     0.79379,     0.79479,      0.7958,      0.7968,      0.7978,      0.7988,      0.7998,      0.8008,      0.8018,      0.8028,      0.8038,      0.8048,     0.80581,     0.80681,     0.80781,     0.80881,     0.80981,     0.81081,     0.81181,     0.81281,     0.81381,     0.81481,     0.81582,
           0.81682,     0.81782,     0.81882,     0.81982,     0.82082,     0.82182,     0.82282,     0.82382,     0.82482,     0.82583,     0.82683,     0.82783,     0.82883,     0.82983,     0.83083,     0.83183,     0.83283,     0.83383,     0.83483,     0.83584,     0.83684,     0.83784,     0.83884,     0.83984,
           0.84084,     0.84184,     0.84284,     0.84384,     0.84484,     0.84585,     0.84685,     0.84785,     0.84885,     0.84985,     0.85085,     0.85185,     0.85285,     0.85385,     0.85485,     0.85586,     0.85686,     0.85786,     0.85886,     0.85986,     0.86086,     0.86186,     0.86286,     0.86386,
           0.86486,     0.86587,     0.86687,     0.86787,     0.86887,     0.86987,     0.87087,     0.87187,     0.87287,     0.87387,     0.87487,     0.87588,     0.87688,     0.87788,     0.87888,     0.87988,     0.88088,     0.88188,     0.88288,     0.88388,     0.88488,     0.88589,     0.88689,     0.88789,
           0.88889,     0.88989,     0.89089,     0.89189,     0.89289,     0.89389,     0.89489,      0.8959,      0.8969,      0.8979,      0.8989,      0.8999,      0.9009,      0.9019,      0.9029,      0.9039,      0.9049,     0.90591,     0.90691,     0.90791,     0.90891,     0.90991,     0.91091,     0.91191,
           0.91291,     0.91391,     0.91491,     0.91592,     0.91692,     0.91792,     0.91892,     0.91992,     0.92092,     0.92192,     0.92292,     0.92392,     0.92492,     0.92593,     0.92693,     0.92793,     0.92893,     0.92993,     0.93093,     0.93193,     0.93293,     0.93393,     0.93493,     0.93594,
           0.93694,     0.93794,     0.93894,     0.93994,     0.94094,     0.94194,     0.94294,     0.94394,     0.94494,     0.94595,     0.94695,     0.94795,     0.94895,     0.94995,     0.95095,     0.95195,     0.95295,     0.95395,     0.95495,     0.95596,     0.95696,     0.95796,     0.95896,     0.95996,
           0.96096,     0.96196,     0.96296,     0.96396,     0.96496,     0.96597,     0.96697,     0.96797,     0.96897,     0.96997,     0.97097,     0.97197,     0.97297,     0.97397,     0.97497,     0.97598,     0.97698,     0.97798,     0.97898,     0.97998,     0.98098,     0.98198,     0.98298,     0.98398,
           0.98498,     0.98599,     0.98699,     0.98799,     0.98899,     0.98999,     0.99099,     0.99199,     0.99299,     0.99399,     0.99499,       0.996,       0.997,       0.998,       0.999,           1]), array([[    0.45548,     0.45548,     0.54119, ...,           0,           0,           0],
       [    0.16129,     0.16129,     0.19113, ...,           0,           0,           0],
       [    0.17494,     0.17494,      0.2234, ...,           0,           0,           0],
       ...,
       [    0.66667,     0.66667,     0.66667, ...,           0,           0,           0],
       [    0.72414,     0.72414,     0.76533, ...,           0,           0,           0],
       [    0.66667,     0.66667,     0.72791, ...,           0,           0,           0]]), 'Confidence', 'F1'], [array([          0,    0.001001,    0.002002,    0.003003,    0.004004,    0.005005,    0.006006,    0.007007,    0.008008,    0.009009,     0.01001,    0.011011,    0.012012,    0.013013,    0.014014,    0.015015,    0.016016,    0.017017,    0.018018,    0.019019,     0.02002,    0.021021,    0.022022,    0.023023,
          0.024024,    0.025025,    0.026026,    0.027027,    0.028028,    0.029029,     0.03003,    0.031031,    0.032032,    0.033033,    0.034034,    0.035035,    0.036036,    0.037037,    0.038038,    0.039039,     0.04004,    0.041041,    0.042042,    0.043043,    0.044044,    0.045045,    0.046046,    0.047047,
          0.048048,    0.049049,     0.05005,    0.051051,    0.052052,    0.053053,    0.054054,    0.055055,    0.056056,    0.057057,    0.058058,    0.059059,     0.06006,    0.061061,    0.062062,    0.063063,    0.064064,    0.065065,    0.066066,    0.067067,    0.068068,    0.069069,     0.07007,    0.071071,
          0.072072,    0.073073,    0.074074,    0.075075,    0.076076,    0.077077,    0.078078,    0.079079,     0.08008,    0.081081,    0.082082,    0.083083,    0.084084,    0.085085,    0.086086,    0.087087,    0.088088,    0.089089,     0.09009,    0.091091,    0.092092,    0.093093,    0.094094,    0.095095,
          0.096096,    0.097097,    0.098098,    0.099099,      0.1001,      0.1011,      0.1021,      0.1031,      0.1041,     0.10511,     0.10611,     0.10711,     0.10811,     0.10911,     0.11011,     0.11111,     0.11211,     0.11311,     0.11411,     0.11512,     0.11612,     0.11712,     0.11812,     0.11912,
           0.12012,     0.12112,     0.12212,     0.12312,     0.12412,     0.12513,     0.12613,     0.12713,     0.12813,     0.12913,     0.13013,     0.13113,     0.13213,     0.13313,     0.13413,     0.13514,     0.13614,     0.13714,     0.13814,     0.13914,     0.14014,     0.14114,     0.14214,     0.14314,
           0.14414,     0.14515,     0.14615,     0.14715,     0.14815,     0.14915,     0.15015,     0.15115,     0.15215,     0.15315,     0.15415,     0.15516,     0.15616,     0.15716,     0.15816,     0.15916,     0.16016,     0.16116,     0.16216,     0.16316,     0.16416,     0.16517,     0.16617,     0.16717,
           0.16817,     0.16917,     0.17017,     0.17117,     0.17217,     0.17317,     0.17417,     0.17518,     0.17618,     0.17718,     0.17818,     0.17918,     0.18018,     0.18118,     0.18218,     0.18318,     0.18418,     0.18519,     0.18619,     0.18719,     0.18819,     0.18919,     0.19019,     0.19119,
           0.19219,     0.19319,     0.19419,      0.1952,      0.1962,      0.1972,      0.1982,      0.1992,      0.2002,      0.2012,      0.2022,      0.2032,      0.2042,     0.20521,     0.20621,     0.20721,     0.20821,     0.20921,     0.21021,     0.21121,     0.21221,     0.21321,     0.21421,     0.21522,
           0.21622,     0.21722,     0.21822,     0.21922,     0.22022,     0.22122,     0.22222,     0.22322,     0.22422,     0.22523,     0.22623,     0.22723,     0.22823,     0.22923,     0.23023,     0.23123,     0.23223,     0.23323,     0.23423,     0.23524,     0.23624,     0.23724,     0.23824,     0.23924,
           0.24024,     0.24124,     0.24224,     0.24324,     0.24424,     0.24525,     0.24625,     0.24725,     0.24825,     0.24925,     0.25025,     0.25125,     0.25225,     0.25325,     0.25425,     0.25526,     0.25626,     0.25726,     0.25826,     0.25926,     0.26026,     0.26126,     0.26226,     0.26326,
           0.26426,     0.26527,     0.26627,     0.26727,     0.26827,     0.26927,     0.27027,     0.27127,     0.27227,     0.27327,     0.27427,     0.27528,     0.27628,     0.27728,     0.27828,     0.27928,     0.28028,     0.28128,     0.28228,     0.28328,     0.28428,     0.28529,     0.28629,     0.28729,
           0.28829,     0.28929,     0.29029,     0.29129,     0.29229,     0.29329,     0.29429,      0.2953,      0.2963,      0.2973,      0.2983,      0.2993,      0.3003,      0.3013,      0.3023,      0.3033,      0.3043,     0.30531,     0.30631,     0.30731,     0.30831,     0.30931,     0.31031,     0.31131,
           0.31231,     0.31331,     0.31431,     0.31532,     0.31632,     0.31732,     0.31832,     0.31932,     0.32032,     0.32132,     0.32232,     0.32332,     0.32432,     0.32533,     0.32633,     0.32733,     0.32833,     0.32933,     0.33033,     0.33133,     0.33233,     0.33333,     0.33433,     0.33534,
           0.33634,     0.33734,     0.33834,     0.33934,     0.34034,     0.34134,     0.34234,     0.34334,     0.34434,     0.34535,     0.34635,     0.34735,     0.34835,     0.34935,     0.35035,     0.35135,     0.35235,     0.35335,     0.35435,     0.35536,     0.35636,     0.35736,     0.35836,     0.35936,
           0.36036,     0.36136,     0.36236,     0.36336,     0.36436,     0.36537,     0.36637,     0.36737,     0.36837,     0.36937,     0.37037,     0.37137,     0.37237,     0.37337,     0.37437,     0.37538,     0.37638,     0.37738,     0.37838,     0.37938,     0.38038,     0.38138,     0.38238,     0.38338,
           0.38438,     0.38539,     0.38639,     0.38739,     0.38839,     0.38939,     0.39039,     0.39139,     0.39239,     0.39339,     0.39439,      0.3954,      0.3964,      0.3974,      0.3984,      0.3994,      0.4004,      0.4014,      0.4024,      0.4034,      0.4044,     0.40541,     0.40641,     0.40741,
           0.40841,     0.40941,     0.41041,     0.41141,     0.41241,     0.41341,     0.41441,     0.41542,     0.41642,     0.41742,     0.41842,     0.41942,     0.42042,     0.42142,     0.42242,     0.42342,     0.42442,     0.42543,     0.42643,     0.42743,     0.42843,     0.42943,     0.43043,     0.43143,
           0.43243,     0.43343,     0.43443,     0.43544,     0.43644,     0.43744,     0.43844,     0.43944,     0.44044,     0.44144,     0.44244,     0.44344,     0.44444,     0.44545,     0.44645,     0.44745,     0.44845,     0.44945,     0.45045,     0.45145,     0.45245,     0.45345,     0.45445,     0.45546,
           0.45646,     0.45746,     0.45846,     0.45946,     0.46046,     0.46146,     0.46246,     0.46346,     0.46446,     0.46547,     0.46647,     0.46747,     0.46847,     0.46947,     0.47047,     0.47147,     0.47247,     0.47347,     0.47447,     0.47548,     0.47648,     0.47748,     0.47848,     0.47948,
           0.48048,     0.48148,     0.48248,     0.48348,     0.48448,     0.48549,     0.48649,     0.48749,     0.48849,     0.48949,     0.49049,     0.49149,     0.49249,     0.49349,     0.49449,      0.4955,      0.4965,      0.4975,      0.4985,      0.4995,      0.5005,      0.5015,      0.5025,      0.5035,
            0.5045,     0.50551,     0.50651,     0.50751,     0.50851,     0.50951,     0.51051,     0.51151,     0.51251,     0.51351,     0.51451,     0.51552,     0.51652,     0.51752,     0.51852,     0.51952,     0.52052,     0.52152,     0.52252,     0.52352,     0.52452,     0.52553,     0.52653,     0.52753,
           0.52853,     0.52953,     0.53053,     0.53153,     0.53253,     0.53353,     0.53453,     0.53554,     0.53654,     0.53754,     0.53854,     0.53954,     0.54054,     0.54154,     0.54254,     0.54354,     0.54454,     0.54555,     0.54655,     0.54755,     0.54855,     0.54955,     0.55055,     0.55155,
           0.55255,     0.55355,     0.55455,     0.55556,     0.55656,     0.55756,     0.55856,     0.55956,     0.56056,     0.56156,     0.56256,     0.56356,     0.56456,     0.56557,     0.56657,     0.56757,     0.56857,     0.56957,     0.57057,     0.57157,     0.57257,     0.57357,     0.57457,     0.57558,
           0.57658,     0.57758,     0.57858,     0.57958,     0.58058,     0.58158,     0.58258,     0.58358,     0.58458,     0.58559,     0.58659,     0.58759,     0.58859,     0.58959,     0.59059,     0.59159,     0.59259,     0.59359,     0.59459,      0.5956,      0.5966,      0.5976,      0.5986,      0.5996,
            0.6006,      0.6016,      0.6026,      0.6036,      0.6046,     0.60561,     0.60661,     0.60761,     0.60861,     0.60961,     0.61061,     0.61161,     0.61261,     0.61361,     0.61461,     0.61562,     0.61662,     0.61762,     0.61862,     0.61962,     0.62062,     0.62162,     0.62262,     0.62362,
           0.62462,     0.62563,     0.62663,     0.62763,     0.62863,     0.62963,     0.63063,     0.63163,     0.63263,     0.63363,     0.63463,     0.63564,     0.63664,     0.63764,     0.63864,     0.63964,     0.64064,     0.64164,     0.64264,     0.64364,     0.64464,     0.64565,     0.64665,     0.64765,
           0.64865,     0.64965,     0.65065,     0.65165,     0.65265,     0.65365,     0.65465,     0.65566,     0.65666,     0.65766,     0.65866,     0.65966,     0.66066,     0.66166,     0.66266,     0.66366,     0.66466,     0.66567,     0.66667,     0.66767,     0.66867,     0.66967,     0.67067,     0.67167,
           0.67267,     0.67367,     0.67467,     0.67568,     0.67668,     0.67768,     0.67868,     0.67968,     0.68068,     0.68168,     0.68268,     0.68368,     0.68468,     0.68569,     0.68669,     0.68769,     0.68869,     0.68969,     0.69069,     0.69169,     0.69269,     0.69369,     0.69469,      0.6957,
            0.6967,      0.6977,      0.6987,      0.6997,      0.7007,      0.7017,      0.7027,      0.7037,      0.7047,     0.70571,     0.70671,     0.70771,     0.70871,     0.70971,     0.71071,     0.71171,     0.71271,     0.71371,     0.71471,     0.71572,     0.71672,     0.71772,     0.71872,     0.71972,
           0.72072,     0.72172,     0.72272,     0.72372,     0.72472,     0.72573,     0.72673,     0.72773,     0.72873,     0.72973,     0.73073,     0.73173,     0.73273,     0.73373,     0.73473,     0.73574,     0.73674,     0.73774,     0.73874,     0.73974,     0.74074,     0.74174,     0.74274,     0.74374,
           0.74474,     0.74575,     0.74675,     0.74775,     0.74875,     0.74975,     0.75075,     0.75175,     0.75275,     0.75375,     0.75475,     0.75576,     0.75676,     0.75776,     0.75876,     0.75976,     0.76076,     0.76176,     0.76276,     0.76376,     0.76476,     0.76577,     0.76677,     0.76777,
           0.76877,     0.76977,     0.77077,     0.77177,     0.77277,     0.77377,     0.77477,     0.77578,     0.77678,     0.77778,     0.77878,     0.77978,     0.78078,     0.78178,     0.78278,     0.78378,     0.78478,     0.78579,     0.78679,     0.78779,     0.78879,     0.78979,     0.79079,     0.79179,
           0.79279,     0.79379,     0.79479,      0.7958,      0.7968,      0.7978,      0.7988,      0.7998,      0.8008,      0.8018,      0.8028,      0.8038,      0.8048,     0.80581,     0.80681,     0.80781,     0.80881,     0.80981,     0.81081,     0.81181,     0.81281,     0.81381,     0.81481,     0.81582,
           0.81682,     0.81782,     0.81882,     0.81982,     0.82082,     0.82182,     0.82282,     0.82382,     0.82482,     0.82583,     0.82683,     0.82783,     0.82883,     0.82983,     0.83083,     0.83183,     0.83283,     0.83383,     0.83483,     0.83584,     0.83684,     0.83784,     0.83884,     0.83984,
           0.84084,     0.84184,     0.84284,     0.84384,     0.84484,     0.84585,     0.84685,     0.84785,     0.84885,     0.84985,     0.85085,     0.85185,     0.85285,     0.85385,     0.85485,     0.85586,     0.85686,     0.85786,     0.85886,     0.85986,     0.86086,     0.86186,     0.86286,     0.86386,
           0.86486,     0.86587,     0.86687,     0.86787,     0.86887,     0.86987,     0.87087,     0.87187,     0.87287,     0.87387,     0.87487,     0.87588,     0.87688,     0.87788,     0.87888,     0.87988,     0.88088,     0.88188,     0.88288,     0.88388,     0.88488,     0.88589,     0.88689,     0.88789,
           0.88889,     0.88989,     0.89089,     0.89189,     0.89289,     0.89389,     0.89489,      0.8959,      0.8969,      0.8979,      0.8989,      0.8999,      0.9009,      0.9019,      0.9029,      0.9039,      0.9049,     0.90591,     0.90691,     0.90791,     0.90891,     0.90991,     0.91091,     0.91191,
           0.91291,     0.91391,     0.91491,     0.91592,     0.91692,     0.91792,     0.91892,     0.91992,     0.92092,     0.92192,     0.92292,     0.92392,     0.92492,     0.92593,     0.92693,     0.92793,     0.92893,     0.92993,     0.93093,     0.93193,     0.93293,     0.93393,     0.93493,     0.93594,
           0.93694,     0.93794,     0.93894,     0.93994,     0.94094,     0.94194,     0.94294,     0.94394,     0.94494,     0.94595,     0.94695,     0.94795,     0.94895,     0.94995,     0.95095,     0.95195,     0.95295,     0.95395,     0.95495,     0.95596,     0.95696,     0.95796,     0.95896,     0.95996,
           0.96096,     0.96196,     0.96296,     0.96396,     0.96496,     0.96597,     0.96697,     0.96797,     0.96897,     0.96997,     0.97097,     0.97197,     0.97297,     0.97397,     0.97497,     0.97598,     0.97698,     0.97798,     0.97898,     0.97998,     0.98098,     0.98198,     0.98298,     0.98398,
           0.98498,     0.98599,     0.98699,     0.98799,     0.98899,     0.98999,     0.99099,     0.99199,     0.99299,     0.99399,     0.99499,       0.996,       0.997,       0.998,       0.999,           1]), array([[    0.29889,     0.29889,     0.37856, ...,           1,           1,           1],
       [   0.089286,    0.089286,     0.10794, ...,           1,           1,           1],
       [   0.098143,    0.098143,     0.12971, ...,           1,           1,           1],
       ...,
       [        0.5,         0.5,         0.5, ...,           1,           1,           1],
       [    0.56757,     0.56757,     0.61987, ...,           1,           1,           1],
       [        0.5,         0.5,     0.57221, ...,           1,           1,           1]]), 'Confidence', 'Precision'], [array([          0,    0.001001,    0.002002,    0.003003,    0.004004,    0.005005,    0.006006,    0.007007,    0.008008,    0.009009,     0.01001,    0.011011,    0.012012,    0.013013,    0.014014,    0.015015,    0.016016,    0.017017,    0.018018,    0.019019,     0.02002,    0.021021,    0.022022,    0.023023,
          0.024024,    0.025025,    0.026026,    0.027027,    0.028028,    0.029029,     0.03003,    0.031031,    0.032032,    0.033033,    0.034034,    0.035035,    0.036036,    0.037037,    0.038038,    0.039039,     0.04004,    0.041041,    0.042042,    0.043043,    0.044044,    0.045045,    0.046046,    0.047047,
          0.048048,    0.049049,     0.05005,    0.051051,    0.052052,    0.053053,    0.054054,    0.055055,    0.056056,    0.057057,    0.058058,    0.059059,     0.06006,    0.061061,    0.062062,    0.063063,    0.064064,    0.065065,    0.066066,    0.067067,    0.068068,    0.069069,     0.07007,    0.071071,
          0.072072,    0.073073,    0.074074,    0.075075,    0.076076,    0.077077,    0.078078,    0.079079,     0.08008,    0.081081,    0.082082,    0.083083,    0.084084,    0.085085,    0.086086,    0.087087,    0.088088,    0.089089,     0.09009,    0.091091,    0.092092,    0.093093,    0.094094,    0.095095,
          0.096096,    0.097097,    0.098098,    0.099099,      0.1001,      0.1011,      0.1021,      0.1031,      0.1041,     0.10511,     0.10611,     0.10711,     0.10811,     0.10911,     0.11011,     0.11111,     0.11211,     0.11311,     0.11411,     0.11512,     0.11612,     0.11712,     0.11812,     0.11912,
           0.12012,     0.12112,     0.12212,     0.12312,     0.12412,     0.12513,     0.12613,     0.12713,     0.12813,     0.12913,     0.13013,     0.13113,     0.13213,     0.13313,     0.13413,     0.13514,     0.13614,     0.13714,     0.13814,     0.13914,     0.14014,     0.14114,     0.14214,     0.14314,
           0.14414,     0.14515,     0.14615,     0.14715,     0.14815,     0.14915,     0.15015,     0.15115,     0.15215,     0.15315,     0.15415,     0.15516,     0.15616,     0.15716,     0.15816,     0.15916,     0.16016,     0.16116,     0.16216,     0.16316,     0.16416,     0.16517,     0.16617,     0.16717,
           0.16817,     0.16917,     0.17017,     0.17117,     0.17217,     0.17317,     0.17417,     0.17518,     0.17618,     0.17718,     0.17818,     0.17918,     0.18018,     0.18118,     0.18218,     0.18318,     0.18418,     0.18519,     0.18619,     0.18719,     0.18819,     0.18919,     0.19019,     0.19119,
           0.19219,     0.19319,     0.19419,      0.1952,      0.1962,      0.1972,      0.1982,      0.1992,      0.2002,      0.2012,      0.2022,      0.2032,      0.2042,     0.20521,     0.20621,     0.20721,     0.20821,     0.20921,     0.21021,     0.21121,     0.21221,     0.21321,     0.21421,     0.21522,
           0.21622,     0.21722,     0.21822,     0.21922,     0.22022,     0.22122,     0.22222,     0.22322,     0.22422,     0.22523,     0.22623,     0.22723,     0.22823,     0.22923,     0.23023,     0.23123,     0.23223,     0.23323,     0.23423,     0.23524,     0.23624,     0.23724,     0.23824,     0.23924,
           0.24024,     0.24124,     0.24224,     0.24324,     0.24424,     0.24525,     0.24625,     0.24725,     0.24825,     0.24925,     0.25025,     0.25125,     0.25225,     0.25325,     0.25425,     0.25526,     0.25626,     0.25726,     0.25826,     0.25926,     0.26026,     0.26126,     0.26226,     0.26326,
           0.26426,     0.26527,     0.26627,     0.26727,     0.26827,     0.26927,     0.27027,     0.27127,     0.27227,     0.27327,     0.27427,     0.27528,     0.27628,     0.27728,     0.27828,     0.27928,     0.28028,     0.28128,     0.28228,     0.28328,     0.28428,     0.28529,     0.28629,     0.28729,
           0.28829,     0.28929,     0.29029,     0.29129,     0.29229,     0.29329,     0.29429,      0.2953,      0.2963,      0.2973,      0.2983,      0.2993,      0.3003,      0.3013,      0.3023,      0.3033,      0.3043,     0.30531,     0.30631,     0.30731,     0.30831,     0.30931,     0.31031,     0.31131,
           0.31231,     0.31331,     0.31431,     0.31532,     0.31632,     0.31732,     0.31832,     0.31932,     0.32032,     0.32132,     0.32232,     0.32332,     0.32432,     0.32533,     0.32633,     0.32733,     0.32833,     0.32933,     0.33033,     0.33133,     0.33233,     0.33333,     0.33433,     0.33534,
           0.33634,     0.33734,     0.33834,     0.33934,     0.34034,     0.34134,     0.34234,     0.34334,     0.34434,     0.34535,     0.34635,     0.34735,     0.34835,     0.34935,     0.35035,     0.35135,     0.35235,     0.35335,     0.35435,     0.35536,     0.35636,     0.35736,     0.35836,     0.35936,
           0.36036,     0.36136,     0.36236,     0.36336,     0.36436,     0.36537,     0.36637,     0.36737,     0.36837,     0.36937,     0.37037,     0.37137,     0.37237,     0.37337,     0.37437,     0.37538,     0.37638,     0.37738,     0.37838,     0.37938,     0.38038,     0.38138,     0.38238,     0.38338,
           0.38438,     0.38539,     0.38639,     0.38739,     0.38839,     0.38939,     0.39039,     0.39139,     0.39239,     0.39339,     0.39439,      0.3954,      0.3964,      0.3974,      0.3984,      0.3994,      0.4004,      0.4014,      0.4024,      0.4034,      0.4044,     0.40541,     0.40641,     0.40741,
           0.40841,     0.40941,     0.41041,     0.41141,     0.41241,     0.41341,     0.41441,     0.41542,     0.41642,     0.41742,     0.41842,     0.41942,     0.42042,     0.42142,     0.42242,     0.42342,     0.42442,     0.42543,     0.42643,     0.42743,     0.42843,     0.42943,     0.43043,     0.43143,
           0.43243,     0.43343,     0.43443,     0.43544,     0.43644,     0.43744,     0.43844,     0.43944,     0.44044,     0.44144,     0.44244,     0.44344,     0.44444,     0.44545,     0.44645,     0.44745,     0.44845,     0.44945,     0.45045,     0.45145,     0.45245,     0.45345,     0.45445,     0.45546,
           0.45646,     0.45746,     0.45846,     0.45946,     0.46046,     0.46146,     0.46246,     0.46346,     0.46446,     0.46547,     0.46647,     0.46747,     0.46847,     0.46947,     0.47047,     0.47147,     0.47247,     0.47347,     0.47447,     0.47548,     0.47648,     0.47748,     0.47848,     0.47948,
           0.48048,     0.48148,     0.48248,     0.48348,     0.48448,     0.48549,     0.48649,     0.48749,     0.48849,     0.48949,     0.49049,     0.49149,     0.49249,     0.49349,     0.49449,      0.4955,      0.4965,      0.4975,      0.4985,      0.4995,      0.5005,      0.5015,      0.5025,      0.5035,
            0.5045,     0.50551,     0.50651,     0.50751,     0.50851,     0.50951,     0.51051,     0.51151,     0.51251,     0.51351,     0.51451,     0.51552,     0.51652,     0.51752,     0.51852,     0.51952,     0.52052,     0.52152,     0.52252,     0.52352,     0.52452,     0.52553,     0.52653,     0.52753,
           0.52853,     0.52953,     0.53053,     0.53153,     0.53253,     0.53353,     0.53453,     0.53554,     0.53654,     0.53754,     0.53854,     0.53954,     0.54054,     0.54154,     0.54254,     0.54354,     0.54454,     0.54555,     0.54655,     0.54755,     0.54855,     0.54955,     0.55055,     0.55155,
           0.55255,     0.55355,     0.55455,     0.55556,     0.55656,     0.55756,     0.55856,     0.55956,     0.56056,     0.56156,     0.56256,     0.56356,     0.56456,     0.56557,     0.56657,     0.56757,     0.56857,     0.56957,     0.57057,     0.57157,     0.57257,     0.57357,     0.57457,     0.57558,
           0.57658,     0.57758,     0.57858,     0.57958,     0.58058,     0.58158,     0.58258,     0.58358,     0.58458,     0.58559,     0.58659,     0.58759,     0.58859,     0.58959,     0.59059,     0.59159,     0.59259,     0.59359,     0.59459,      0.5956,      0.5966,      0.5976,      0.5986,      0.5996,
            0.6006,      0.6016,      0.6026,      0.6036,      0.6046,     0.60561,     0.60661,     0.60761,     0.60861,     0.60961,     0.61061,     0.61161,     0.61261,     0.61361,     0.61461,     0.61562,     0.61662,     0.61762,     0.61862,     0.61962,     0.62062,     0.62162,     0.62262,     0.62362,
           0.62462,     0.62563,     0.62663,     0.62763,     0.62863,     0.62963,     0.63063,     0.63163,     0.63263,     0.63363,     0.63463,     0.63564,     0.63664,     0.63764,     0.63864,     0.63964,     0.64064,     0.64164,     0.64264,     0.64364,     0.64464,     0.64565,     0.64665,     0.64765,
           0.64865,     0.64965,     0.65065,     0.65165,     0.65265,     0.65365,     0.65465,     0.65566,     0.65666,     0.65766,     0.65866,     0.65966,     0.66066,     0.66166,     0.66266,     0.66366,     0.66466,     0.66567,     0.66667,     0.66767,     0.66867,     0.66967,     0.67067,     0.67167,
           0.67267,     0.67367,     0.67467,     0.67568,     0.67668,     0.67768,     0.67868,     0.67968,     0.68068,     0.68168,     0.68268,     0.68368,     0.68468,     0.68569,     0.68669,     0.68769,     0.68869,     0.68969,     0.69069,     0.69169,     0.69269,     0.69369,     0.69469,      0.6957,
            0.6967,      0.6977,      0.6987,      0.6997,      0.7007,      0.7017,      0.7027,      0.7037,      0.7047,     0.70571,     0.70671,     0.70771,     0.70871,     0.70971,     0.71071,     0.71171,     0.71271,     0.71371,     0.71471,     0.71572,     0.71672,     0.71772,     0.71872,     0.71972,
           0.72072,     0.72172,     0.72272,     0.72372,     0.72472,     0.72573,     0.72673,     0.72773,     0.72873,     0.72973,     0.73073,     0.73173,     0.73273,     0.73373,     0.73473,     0.73574,     0.73674,     0.73774,     0.73874,     0.73974,     0.74074,     0.74174,     0.74274,     0.74374,
           0.74474,     0.74575,     0.74675,     0.74775,     0.74875,     0.74975,     0.75075,     0.75175,     0.75275,     0.75375,     0.75475,     0.75576,     0.75676,     0.75776,     0.75876,     0.75976,     0.76076,     0.76176,     0.76276,     0.76376,     0.76476,     0.76577,     0.76677,     0.76777,
           0.76877,     0.76977,     0.77077,     0.77177,     0.77277,     0.77377,     0.77477,     0.77578,     0.77678,     0.77778,     0.77878,     0.77978,     0.78078,     0.78178,     0.78278,     0.78378,     0.78478,     0.78579,     0.78679,     0.78779,     0.78879,     0.78979,     0.79079,     0.79179,
           0.79279,     0.79379,     0.79479,      0.7958,      0.7968,      0.7978,      0.7988,      0.7998,      0.8008,      0.8018,      0.8028,      0.8038,      0.8048,     0.80581,     0.80681,     0.80781,     0.80881,     0.80981,     0.81081,     0.81181,     0.81281,     0.81381,     0.81481,     0.81582,
           0.81682,     0.81782,     0.81882,     0.81982,     0.82082,     0.82182,     0.82282,     0.82382,     0.82482,     0.82583,     0.82683,     0.82783,     0.82883,     0.82983,     0.83083,     0.83183,     0.83283,     0.83383,     0.83483,     0.83584,     0.83684,     0.83784,     0.83884,     0.83984,
           0.84084,     0.84184,     0.84284,     0.84384,     0.84484,     0.84585,     0.84685,     0.84785,     0.84885,     0.84985,     0.85085,     0.85185,     0.85285,     0.85385,     0.85485,     0.85586,     0.85686,     0.85786,     0.85886,     0.85986,     0.86086,     0.86186,     0.86286,     0.86386,
           0.86486,     0.86587,     0.86687,     0.86787,     0.86887,     0.86987,     0.87087,     0.87187,     0.87287,     0.87387,     0.87487,     0.87588,     0.87688,     0.87788,     0.87888,     0.87988,     0.88088,     0.88188,     0.88288,     0.88388,     0.88488,     0.88589,     0.88689,     0.88789,
           0.88889,     0.88989,     0.89089,     0.89189,     0.89289,     0.89389,     0.89489,      0.8959,      0.8969,      0.8979,      0.8989,      0.8999,      0.9009,      0.9019,      0.9029,      0.9039,      0.9049,     0.90591,     0.90691,     0.90791,     0.90891,     0.90991,     0.91091,     0.91191,
           0.91291,     0.91391,     0.91491,     0.91592,     0.91692,     0.91792,     0.91892,     0.91992,     0.92092,     0.92192,     0.92292,     0.92392,     0.92492,     0.92593,     0.92693,     0.92793,     0.92893,     0.92993,     0.93093,     0.93193,     0.93293,     0.93393,     0.93493,     0.93594,
           0.93694,     0.93794,     0.93894,     0.93994,     0.94094,     0.94194,     0.94294,     0.94394,     0.94494,     0.94595,     0.94695,     0.94795,     0.94895,     0.94995,     0.95095,     0.95195,     0.95295,     0.95395,     0.95495,     0.95596,     0.95696,     0.95796,     0.95896,     0.95996,
           0.96096,     0.96196,     0.96296,     0.96396,     0.96496,     0.96597,     0.96697,     0.96797,     0.96897,     0.96997,     0.97097,     0.97197,     0.97297,     0.97397,     0.97497,     0.97598,     0.97698,     0.97798,     0.97898,     0.97998,     0.98098,     0.98198,     0.98298,     0.98398,
           0.98498,     0.98599,     0.98699,     0.98799,     0.98899,     0.98999,     0.99099,     0.99199,     0.99299,     0.99399,     0.99499,       0.996,       0.997,       0.998,       0.999,           1]), array([[    0.95669,     0.95669,     0.94882, ...,           0,           0,           0],
       [    0.83333,     0.83333,     0.83333, ...,           0,           0,           0],
       [    0.80435,     0.80435,     0.80435, ...,           0,           0,           0],
       ...,
       [          1,           1,           1, ...,           0,           0,           0],
       [          1,           1,           1, ...,           0,           0,           0],
       [          1,           1,           1, ...,           0,           0,           0]]), 'Confidence', 'Recall']]
fitness: np.float64(0.8457210566124942)
keys: ['metrics/precision(B)', 'metrics/recall(B)', 'metrics/mAP50(B)', 'metrics/mAP50-95(B)']
maps: array([    0.77751,     0.67418,     0.44381,     0.95884,     0.94651,     0.83433,     0.95942,       0.787,     0.63833,     0.32456,     0.83402,       0.825,     0.83402,     0.91626,     0.86562,       0.995,     0.97732,       0.995,     0.83402,     0.83402,     0.94973,       0.995,       0.995,     0.97047,
            0.7335,     0.83796,     0.70183,     0.79891,     0.94059,     0.83722,       0.796,       0.744,     0.39946,     0.65381,      0.4955,     0.43303,      0.8272,     0.83402,     0.73649,     0.67519,     0.69553,     0.80411,     0.86789,     0.66859,     0.74391,     0.81901,       0.995,     0.83402,
             0.995,     0.82905,     0.78665,     0.83301,       0.995,     0.97747,     0.98664,       0.995,     0.81409,     0.96763,     0.90945,       0.995,     0.91061,       0.995,       0.995,     0.93349,     0.70237,     0.78882,     0.83402,     0.73239,      0.9534,      0.9395,     0.83402,     0.88835,
             0.978,     0.62447,     0.90482,     0.94783,      0.8955,     0.94706,     0.83402,     0.96115])
names: {0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'}
nt_per_class: array([254,   6,  46,   5,   6,   7,   3,  12,   6,  14,   0,   2,   0,   9,  16,   4,   9,   2,   0,   0,  17,   1,   4,   9,   6,  18,  19,   7,   4,   5,   1,   7,   6,  10,   4,   7,   5,   0,   7,  18,  16,  36,   6,  16,  22,  28,   1,   0,   2,   4,  11,  24,   2,   5,  14,   4,  35,   6,  14,   3,  13,   2,
         2,   3,   2,   8,   0,   8,   3,   5,   0,   6,   5,  29,   9,   2,   1,  21,   0,   5])
nt_per_image: array([61,  3, 12,  4,  5,  5,  3,  5,  2,  4,  0,  2,  0,  5,  2,  4,  9,  1,  0,  0,  4,  1,  2,  4,  4,  4,  9,  6,  2,  5,  1,  2,  6,  2,  4,  4,  3,  0,  5,  6,  5, 10,  6,  7,  5,  9,  1,  0,  2,  1,  4,  3,  1,  5,  2,  4,  9,  5,  9,  3, 10,  2,  2,  2,  2,  5,  0,  5,  3,  5,  0,  4,  5,  6,  8,  2,  1,  6,
        0,  2])
results_dict: {'metrics/precision(B)': np.float64(0.9451528260552163), 'metrics/recall(B)': np.float64(0.9006474438353611), 'metrics/mAP50(B)': np.float64(0.9510379206922379), 'metrics/mAP50-95(B)': np.float64(0.834019182825856), 'fitness': np.float64(0.8457210566124942)}
save_dir: Path('runs/detect/val8')
speed: {'preprocess': 0.1319386875024975, 'inference': 7.261490328147602, 'loss': 0.0029867266988503616, 'postprocess': 0.37337545310833775}
stats: {'tp': [], 'conf': [], 'pred_cls': [], 'target_cls': [], 'target_img': []}
task: 'detect'
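The dump above is just the attribute listing of the Ultralytics DetMetrics object returned by validation. As a rough sketch (not part of the tutorial code), the same information can be pulled out programmatically instead of printed wholesale; this assumes the pruned checkpoint path shown in the export log below, and that each curves_results entry keeps the [x, y, xlabel, ylabel] layout visible in the dump.

from ultralytics import YOLO
import matplotlib.pyplot as plt

# Reload the pruned-and-finetuned checkpoint (path taken from the export log below).
model = YOLO('runs/detect/step_14_finetune/weights/best.pt')
metrics = model.val(data='coco128.yaml')

print(metrics.results_dict)   # precision, recall, mAP50, mAP50-95 and fitness
print(metrics.speed)          # per-image preprocess / inference / postprocess times (ms)

# Each curves_results entry is [x, y, xlabel, ylabel], e.g. Confidence vs F1.
for x, y, xlabel, ylabel in metrics.curves_results:
    plt.figure()
    plt.plot(x, y.mean(0))    # average the per-class curves into one line
    plt.xlabel(xlabel)
    plt.ylabel(ylabel)
plt.show()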
model.export(format='onnx', half=True)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CPU (Intel Core(TM) i9-14900KS)
WARNING ⚠️ half=True only compatible with GPU export, i.e. use device=0

PyTorch: starting from 'runs/detect/step_14_finetune/weights/best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (124.2 MB)

ONNX: starting export with onnx 1.17.0 opset 10...
W0202 14:50:40.775000 36862 site-packages/torch/onnx/_internal/exporter/_compat.py:114] Setting ONNX exporter to use operator set version 18 because the requested opset_version 10 is a lower version than we have implementations for. Automatic version conversion will be performed, which may not be successful at converting to the requested version. If version conversion is unsuccessful, the opset version of the exported model will be kept at 18. Please consider setting opset_version >=18 to leverage latest ONNX features
The model version conversion is not supported by the onnxscript version converter and fallback is enabled. The model will be converted using the onnx C API (target version: 10).
Failed to convert the model to the target version 10 using the ONNX C API. The model was not modified
Traceback (most recent call last):
  File "/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/onnxscript/version_converter/__init__.py", line 127, in call
    converted_proto = _c_api_utils.call_onnx_api(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/onnxscript/version_converter/_c_api_utils.py", line 65, in call_onnx_api
    result = func(proto)
             ^^^^^^^^^^^
  File "/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/onnxscript/version_converter/__init__.py", line 122, in _partial_convert_version
    return onnx.version_converter.convert_version(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/onnx/version_converter.py", line 38, in convert_version
    converted_model_str = C.convert_version(model_str, target_version)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: /github/workspace/onnx/version_converter/BaseConverter.h:70: adapter_lookup: Assertion `false` failed: No Adapter To Version $17 for Resize
Applied 1 of general pattern rewrite rules.
ONNX: slimming with onnxslim 0.1.59...
ONNX: export success ✅ 2.3s, saved as 'runs/detect/step_14_finetune/weights/best.onnx' (123.8 MB)

Export complete (2.6s)
Results saved to /home/nathan/Developer/FasterAI-Labs/gh/fasterai/nbs/tutorials/prune/runs/detect/step_14_finetune/weights
Predict:         yolo predict task=detect model=runs/detect/step_14_finetune/weights/best.onnx imgsz=640  
Validate:        yolo val task=detect model=runs/detect/step_14_finetune/weights/best.onnx imgsz=640 data=/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/ultralytics/cfg/datasets/coco128.yaml  
Visualize:       https://netron.app
'runs/detect/step_14_finetune/weights/best.onnx'
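The log above shows half=True being ignored for a CPU export, and the ONNX version converter falling back to opset 18 after failing to downgrade the Resize op to the requested opset 10. Below is a hedged sketch of working around both, assuming a CUDA device is available and picking opset=17 purely as an illustrative value; the predict and validate calls mirror the CLI commands printed by the exporter.

from ultralytics import YOLO

# Re-export on the GPU so half=True actually takes effect, with an explicit opset.
model = YOLO('runs/detect/step_14_finetune/weights/best.pt')
onnx_path = model.export(format='onnx', half=True, device=0, opset=17)  # opset=17 is an assumption

# The exported file loads back through the same API, mirroring the printed
# `yolo predict` / `yolo val` commands.
onnx_model = YOLO(onnx_path)
onnx_model.predict('https://ultralytics.com/images/bus.jpg', imgsz=640)
onnx_model.val(data='coco128.yaml', imgsz=640)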