YOLOv8

Helpers

Training

import argparse
from functools import partial

# Schedule, sched_onecycle, and prune are defined in the Helpers section above
class Args(argparse.Namespace):
  model = 'yolov8l.pt'
  cfg = 'default.yaml'
  iterative_steps = 10        # number of prune / fine-tune cycles
  target_prune_rate = 0.15    # overall pruning ratio to reach by the last step
  max_map_drop = 0.2          # tolerated mAP drop before stopping
  sched = Schedule(partial(sched_onecycle, α=10, β=4))

args = Args()
prune(args)
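For orientation before the logs: the `prune(args)` driver follows the pattern visible below (validate a baseline, then repeatedly prune to a growing cumulative ratio, re-validate, fine-tune, and bail out if accuracy collapses). This is a minimal sketch of that control flow with hypothetical stand-in callables (`validate`, `prune_step`, `finetune`) and a linear progress ramp in place of the notebook's one-cycle schedule; it is not the actual implementation.

```python
def prune_loop(validate, prune_step, finetune,
               iterative_steps, target_prune_rate, max_map_drop):
    """Sketch of an iterative prune / fine-tune driver.

    validate()        -> (macs, n_params, map50_95)
    prune_step(ratio) -> prunes channels up to the cumulative ratio
    finetune()        -> trains a few epochs, returns the recovered mAP
    """
    base_macs, _, base_map = validate()
    for i in range(iterative_steps):
        # the notebook feeds this through sched_onecycle instead of a line
        progress = (i + 1) / iterative_steps
        prune_step(progress * target_prune_rate)  # cumulative ratio grows
        macs, n_params, map_now = validate()
        print(f"After pruning iter {i + 1}: speed up={base_macs / macs}")
        map_now = finetune()
        if base_map - map_now > max_map_drop:  # accuracy collapsed: stop
            break
```

With `iterative_steps=10` and `target_prune_rate=0.15`, the final call asks for the full 15% ratio, matching the configuration above.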
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 43,668,288 parameters, 0 gradients, 165.2 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4049.2±1625.9 MB/s, size: 51.0 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.731      0.768      0.828       0.66
Speed: 0.7ms preprocess, 3.1ms inference, 0.0ms loss, 2.1ms postprocess per image
Results saved to runs/detect/val8
Before Pruning: MACs= 82.72641 G, #Params= 43.69152 M, mAP= 0.66035
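A note on the two cost figures above: the model summary reports GFLOPs while the pruning report counts MACs, and the usual convention is FLOPs ≈ 2 × MACs (one multiply plus one add). The small gap to the logged 165.2 GFLOPs plausibly comes from non-MAC ops and layer fusion, so treat the factor as an approximation:

```python
macs_g = 82.72641           # GMACs from the "Before Pruning" line
approx_gflops = 2 * macs_g  # one MAC = one multiply + one add
print(f"{approx_gflops:.1f} GFLOPs")  # ~165.5, vs 165.2 in the summary
```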
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=train7, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/train7, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3969.9±1494.9 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/lab
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 505.5±201.7 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label
Plotting labels to runs/detect/train7/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/train7
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      17.6G     0.8369     0.7191      1.072        121        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.774      0.763      0.839      0.674

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      17.1G     0.8351      0.665      1.061        113        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.826      0.783       0.85      0.689

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      17.2G     0.8322     0.6222      1.066        118        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.858      0.794       0.86      0.704

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      17.1G     0.8023     0.5615      1.029         68        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.896      0.793       0.87      0.717

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      17.3G     0.7755      0.521      1.012         95        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.879      0.824       0.89      0.731

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      17.3G     0.7552     0.5039      1.011        122        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.869       0.84      0.892      0.738

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      16.8G     0.7342     0.4821     0.9817         75        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.885      0.835      0.896      0.749

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      17.2G     0.7389     0.4766     0.9989        142        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.884      0.855      0.904      0.762

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      17.2G     0.7197     0.4778     0.9785        104        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.875      0.866      0.909      0.767

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      17.2G     0.7149      0.457      1.007        164        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.882      0.867      0.911      0.768

10 epochs completed in 0.010 hours.
Optimizer stripped from runs/detect/train7/weights/last.pt, 175.3MB
Optimizer stripped from runs/detect/train7/weights/best.pt, 175.3MB

Validating runs/detect/train7/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 43,668,288 parameters, 0 gradients, 165.2 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.883      0.867      0.911      0.768
Speed: 0.1ms preprocess, 2.7ms inference, 0.0ms loss, 0.3ms postprocess per image
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 43,668,288 parameters, 0 gradients, 165.2 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5337.5±708.2 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.901      0.849      0.904      0.769
Speed: 0.1ms preprocess, 5.4ms inference, 0.0ms loss, 0.5ms postprocess per image
Results saved to runs/detect/baseline_val4
Before Pruning: MACs= 82.72641 G, #Params= 43.69152 M, mAP= 0.76904
Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruning step 1: progress=0.018, ratio=0.003
After Pruning
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 43,043,386 parameters, 74,176 gradients, 162.6 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5560.8±1327.8 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.878      0.862      0.904      0.746
Speed: 0.2ms preprocess, 6.9ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_0_pre_val2
After post-pruning Validation
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 1: MACs=81.4709528 G, #Params=43.066447 M, mAP=0.7464191064783372, speed up=1.0154098308274602
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_0_finetune2, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_0_finetune2, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3796.3±1197.1 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/lab
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1819.0±382.7 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label
Plotting labels to runs/detect/step_0_finetune2/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_0_finetune2
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      17.4G       0.67     0.4225     0.9631        121        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.907      0.846      0.908      0.755

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      17.4G     0.6359     0.3913      0.947        113        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.888      0.861      0.914      0.757

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      17.3G     0.6677      0.427     0.9806        118        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.896      0.861      0.914      0.761

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      17.4G     0.6512     0.3957     0.9469         68        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.907      0.858      0.916      0.776

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      17.6G     0.6385     0.3909       0.94         95        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.924      0.852      0.919      0.779

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      17.3G     0.6406     0.4071     0.9522        122        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.942      0.846      0.917      0.781

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      17.4G     0.6228     0.3905     0.9324         75        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.877      0.883       0.92      0.788

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      17.4G     0.6583     0.4037     0.9571        142        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.924      0.866      0.923      0.793

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      17.4G     0.6465     0.4069      0.941        104        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929       0.92      0.875      0.931      0.798

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      17.4G     0.6573     0.4086     0.9788        164        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.914      0.884      0.932      0.799

10 epochs completed in 0.010 hours.
Optimizer stripped from runs/detect/step_0_finetune2/weights/last.pt, 172.8MB
Optimizer stripped from runs/detect/step_0_finetune2/weights/best.pt, 172.8MB

Validating runs/detect/step_0_finetune2/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 43,043,386 parameters, 0 gradients, 162.6 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.914      0.884      0.932      0.799
Speed: 0.1ms preprocess, 3.1ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 43,043,386 parameters, 0 gradients, 162.6 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4856.6±2095.6 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.916       0.87      0.922      0.791
Speed: 0.1ms preprocess, 7.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_0_post_val2
After fine tuning mAP=0.7912829910872162
After post fine-tuning validation
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruning step 2: progress=0.048, ratio=0.007
After Pruning
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 42,094,706 parameters, 74,160 gradients, 158.8 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5539.0±1668.1 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.926      0.845      0.912      0.769
Speed: 0.1ms preprocess, 6.9ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_1_pre_val2
After post-pruning Validation
Model Conv2d(3, 63, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 2: MACs=79.5541908 G, #Params=42.117503 M, mAP=0.7685751155559024, speed up=1.0398749024796818
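A pattern worth noting in the step headers: the per-step ratio looks like progress × target_prune_rate (0.15), i.e. the schedule emits a cumulative progress and the pruner scales it into a channel ratio. This is an inference from the logged numbers, not a confirmed reading of the helper code; checking it against the three logged steps:

```python
target_prune_rate = 0.15
# (progress, ratio) pairs from the "Pruning step" log lines
logged = [(0.018, 0.003), (0.048, 0.007), (0.119, 0.018)]
for progress, ratio in logged:
    assert round(progress * target_prune_rate, 3) == ratio
print("ratio == progress * target_prune_rate for all logged steps")
```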
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_1_finetune2, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_1_finetune2, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3717.7±1437.5 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/lab
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1717.9±457.2 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label
Plotting labels to runs/detect/step_1_finetune2/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_1_finetune2
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10        17G     0.6016     0.3791     0.9319        121        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.924      0.859      0.921      0.783

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      17.1G     0.5765     0.3537     0.9187        113        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.926      0.864      0.918      0.786

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      17.1G     0.5879     0.3755     0.9353        118        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.915      0.868      0.919      0.791

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      17.2G     0.5637     0.3453     0.9177         68        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929       0.93       0.87      0.932      0.794

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      17.2G     0.5691     0.3553     0.9072         95        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.928      0.869      0.928      0.794

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      17.2G     0.5736     0.3496     0.9185        122        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.924      0.872      0.924      0.796

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      17.4G     0.5726     0.3525     0.9006         75        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.926      0.873      0.924      0.794

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      17.3G     0.6045     0.3704     0.9303        142        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.927      0.882      0.932        0.8

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      17.3G     0.6179     0.3961     0.9203        104        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.938      0.883      0.932      0.804

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      17.1G     0.6393      0.416     0.9573        164        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.941      0.883      0.933      0.804

10 epochs completed in 0.010 hours.
Optimizer stripped from runs/detect/step_1_finetune2/weights/last.pt, 169.0MB
Optimizer stripped from runs/detect/step_1_finetune2/weights/best.pt, 169.0MB

Validating runs/detect/step_1_finetune2/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 42,094,706 parameters, 0 gradients, 158.8 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.941      0.883      0.933      0.804
Speed: 0.1ms preprocess, 3.1ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 42,094,706 parameters, 0 gradients, 158.8 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5405.8±1076.0 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.931      0.882      0.931      0.795
Speed: 0.1ms preprocess, 7.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_1_post_val2
After fine tuning mAP=0.7950947724666012
After post fine-tuning validation
Model Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruning step 3: progress=0.119, ratio=0.018
After Pruning
Model Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 40,324,469 parameters, 74,160 gradients, 152.5 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5472.8±1324.6 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.858      0.851      0.907      0.743
Speed: 0.1ms preprocess, 7.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_2_pre_val2
After post-pruning Validation
Model Conv2d(3, 62, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 3: MACs=76.3708784 G, #Params=40.34678 M, mAP=0.7432541366864699, speed up=1.0832192601833424
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_2_finetune2, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_2_finetune2, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4279.1±1583.2 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/lab
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 975.8±228.6 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label
Plotting labels to runs/detect/step_2_finetune2/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_2_finetune2
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      16.8G     0.6389     0.3964     0.9263        121        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.922      0.836      0.912      0.764

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      17.3G     0.5608      0.361      0.903        113        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.926      0.852      0.925       0.78

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      17.3G     0.5679      0.364     0.9166        118        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.903      0.876      0.931      0.783

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      17.1G      0.549      0.362     0.8975         68        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.905      0.885      0.934       0.79

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      16.9G     0.5402     0.3396     0.8914         95        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.932      0.873      0.929      0.793

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      17.1G     0.5511     0.3452     0.9006        122        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.925      0.872      0.933      0.797

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      17.1G     0.5463     0.3546     0.8866         75        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929       0.92      0.882      0.932      0.797

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      16.9G     0.5963     0.3718     0.9195        142        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.924      0.894      0.936      0.801

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      16.9G     0.6017     0.3778     0.9098        104        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.926      0.896      0.937      0.806

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      17.3G     0.6401     0.4083      0.954        164        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929       0.93      0.892      0.937      0.808

10 epochs completed in 0.010 hours.
Optimizer stripped from runs/detect/step_2_finetune2/weights/last.pt, 161.9MB
Optimizer stripped from runs/detect/step_2_finetune2/weights/best.pt, 161.9MB

Validating runs/detect/step_2_finetune2/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 40,324,469 parameters, 0 gradients, 152.5 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929       0.93      0.892      0.937      0.808
Speed: 0.1ms preprocess, 3.0ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 40,324,469 parameters, 0 gradients, 152.5 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4201.8±2216.9 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.927      0.891      0.936        0.8
Speed: 0.1ms preprocess, 7.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_2_post_val2
After fine tuning mAP=0.7996752102772763
After post fine-tuning validation
Model Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruning step 4: progress=0.270, ratio=0.040
After Pruning
Model Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 37,708,749 parameters, 74,160 gradients, 143.2 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4881.8±1731.9 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.875      0.805      0.892      0.702
Speed: 0.1ms preprocess, 6.1ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_3_pre_val2
After post-pruning Validation
Model Conv2d(3, 61, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 4: MACs=71.732976 G, #Params=37.730325 M, mAP=0.7020598286629811, speed up=1.1532549046898597
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_3_finetune2, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_3_finetune2, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4040.0±1479.4 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/lab
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 898.4±215.3 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label
Plotting labels to runs/detect/step_3_finetune2/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_3_finetune2
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10        16G     0.6694     0.4281     0.9392        121        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.884      0.856      0.915      0.743

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      16.3G      0.596      0.378     0.9048        113        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.899      0.869      0.924      0.762

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      16.2G     0.5892      0.389     0.9174        118        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.901      0.868      0.922      0.775

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      16.3G     0.5714     0.3688     0.8989         68        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.923      0.875      0.933      0.779

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      16.3G     0.5768     0.3685     0.8988         95        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929       0.93      0.875      0.935      0.788

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      16.3G     0.5769     0.3674     0.8972        122        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.902      0.892      0.937       0.79

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      16.3G     0.5726     0.3653     0.8875         75        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.929      0.876      0.937      0.796

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      16.4G     0.6152     0.3919     0.9235        142        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.918      0.883      0.939      0.802

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      16.4G     0.6269     0.3936       0.92        104        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.922      0.886      0.939      0.805

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      16.2G     0.6646     0.4099     0.9612        164        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.924      0.886      0.941      0.809

10 epochs completed in 0.010 hours.
Optimizer stripped from runs/detect/step_3_finetune2/weights/last.pt, 151.5MB
Optimizer stripped from runs/detect/step_3_finetune2/weights/best.pt, 151.5MB

Validating runs/detect/step_3_finetune2/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 37,708,749 parameters, 0 gradients, 143.2 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.924      0.886      0.941      0.809
Speed: 0.1ms preprocess, 2.9ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 37,708,749 parameters, 0 gradients, 143.2 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3800.7±1771.4 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.916      0.888      0.941      0.808
Speed: 0.1ms preprocess, 6.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_3_post_val2
After fine tuning mAP=0.8076550755729582
After post fine-tuning validation
Model Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruning step 5: progress=0.501, ratio=0.075
After Pruning
Model Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 35,132,671 parameters, 74,160 gradients, 133.2 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5237.1±1333.4 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.786      0.737       0.83      0.664
Speed: 0.1ms preprocess, 6.5ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_4_pre_val2
After post-pruning Validation
Model Conv2d(3, 59, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 5: MACs=66.7424992 G, #Params=35.153479 M, mAP=0.6635248706814774, speed up=1.2394861953266503
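The `ratio` printed at each "Pruning step" line is consistent with scaling the schedule's `progress` by `target_prune_rate = 0.15` from `Args`. The progress curve itself comes from `sched_onecycle` (defined in the Helpers section and not reproduced here); the sketch below only checks that mapping against the values read off the log:

```python
# Hedged check: per-step prune ratio ≈ progress * target_prune_rate.
# (step, progress, ratio) triples are copied from the log lines above;
# the progress values are produced by sched_onecycle, not recomputed here.
target_prune_rate = 0.15

logged = {4: (0.270, 0.040), 5: (0.501, 0.075),
          6: (0.733, 0.110), 7: (0.883, 0.132)}
for step, (progress, ratio) in logged.items():
    assert abs(progress * target_prune_rate - ratio) < 5e-3
    print(f"step {step}: progress={progress:.3f} -> ratio={ratio:.3f}")
```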
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_4_finetune2, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_4_finetune2, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3910.5±1562.2 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/lab
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 745.1±159.7 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label
Plotting labels to runs/detect/step_4_finetune2/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_4_finetune2
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      15.6G     0.7302      0.482      0.966        121        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.844      0.814      0.887      0.718

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      15.5G     0.6487     0.4256     0.9334        113        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.891      0.835       0.91      0.753

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      15.7G     0.6402     0.4361     0.9413        118        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.889      0.848      0.919      0.761

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      15.7G     0.6214     0.3973     0.9185         68        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.893      0.862      0.924      0.776

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      15.7G     0.5974     0.3845     0.9032         95        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.914       0.86      0.929      0.779

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      15.7G     0.6027     0.3936     0.9126        122        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.926       0.86      0.932      0.786

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      15.8G     0.5974     0.3946     0.8942         75        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.922      0.872      0.934      0.791

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      15.8G     0.6442     0.4018     0.9322        142        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.914      0.883      0.935        0.8

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      15.8G      0.659     0.4155     0.9267        104        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.914      0.886      0.936      0.802

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      16.3G     0.6831     0.4327     0.9751        164        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.909      0.897      0.937      0.801

10 epochs completed in 0.010 hours.
Optimizer stripped from runs/detect/step_4_finetune2/weights/last.pt, 141.2MB
Optimizer stripped from runs/detect/step_4_finetune2/weights/best.pt, 141.2MB

Validating runs/detect/step_4_finetune2/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 35,132,671 parameters, 0 gradients, 133.2 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.914      0.886      0.936      0.802
Speed: 0.1ms preprocess, 2.7ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 35,132,671 parameters, 0 gradients, 133.2 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4520.2±1803.1 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.912      0.886      0.938        0.8
Speed: 0.1ms preprocess, 6.5ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_4_post_val2
After fine tuning mAP=0.7996035449784826
After post fine-tuning validation
Model Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruning step 6: progress=0.733, ratio=0.110
After Pruning
Model Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 33,747,610 parameters, 74,160 gradients, 128.5 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4015.0±1353.0 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.918      0.822      0.908      0.743
Speed: 0.2ms preprocess, 6.0ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_5_pre_val2
After post-pruning Validation
Model Conv2d(3, 56, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 6: MACs=64.3900056 G, #Params=33.768007 M, mAP=0.7431841358762444, speed up=1.2847709148203583
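The stripped checkpoint sizes shrink roughly in lockstep with the parameter counts, as expected for fp32 weights at 4 bytes per parameter plus a small serialization overhead. A rough sanity check (the exact on-disk size also depends on pickled metadata, so the estimate runs slightly low):

```python
# Rough estimate: fp32 checkpoint size ≈ params * 4 bytes.
# (params, logged MB) pairs come from the model summaries and the
# "Optimizer stripped" lines above; the gap is pickle/metadata overhead.
checkpoints = [
    (40_324_469, 161.9),  # step_2_finetune2
    (37_708_749, 151.5),  # step_3_finetune2
    (35_132_671, 141.2),  # step_4_finetune2
    (33_747_610, 135.6),  # step_5_finetune2
]
for params, logged_mb in checkpoints:
    est_mb = params * 4 / 1e6
    print(f"{params:,} params -> est {est_mb:.1f} MB (logged {logged_mb} MB)")
```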
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: agnostic_nms=False, amp=False, augment=False, auto_augment=randaugment, batch=16, bgr=0.0, box=7.5, cache=False, cfg=None, classes=None, close_mosaic=10, cls=0.5, conf=None, copy_paste=0.0, copy_paste_mode=flip, cos_lr=False, cutmix=0.0, data=coco128.yaml, degrees=0.0, deterministic=True, device=None, dfl=1.5, dnn=False, dropout=0.0, dynamic=False, embed=None, epochs=10, erasing=0.4, exist_ok=False, fliplr=0.5, flipud=0.0, format=torchscript, fraction=1.0, freeze=None, half=False, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, imgsz=640, int8=False, iou=0.7, keras=False, kobj=1.0, line_width=None, lr0=0.01, lrf=0.01, mask_ratio=4, max_det=300, mixup=0.0, mode=train, model=yolov8l.pt, momentum=0.937, mosaic=1.0, multi_scale=False, name=step_5_finetune2, nbs=64, nms=False, opset=None, optimize=False, optimizer=auto, overlap_mask=True, patience=100, perspective=0.0, plots=True, pose=12.0, pretrained=True, profile=False, project=None, rect=False, resume=False, retina_masks=False, save=True, save_conf=False, save_crop=False, save_dir=runs/detect/step_5_finetune2, save_frames=False, save_json=False, save_period=-1, save_txt=False, scale=0.5, seed=0, shear=0.0, show=False, show_boxes=True, show_conf=True, show_labels=True, simplify=True, single_cls=False, source=None, split=val, stream_buffer=False, task=detect, time=None, tracker=botsort.yaml, translate=0.1, val=True, verbose=False, vid_stride=1, visualize=False, warmup_bias_lr=0.1, warmup_epochs=3.0, warmup_momentum=0.8, weight_decay=0.0005, workers=8, workspace=None
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4044.3±1635.3 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/lab
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 726.9±171.1 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label
Plotting labels to runs/detect/step_5_finetune2/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_5_finetune2
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      15.3G     0.6333     0.4011     0.9294        121        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.926      0.828      0.922      0.757

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      15.3G     0.5444     0.3673     0.8873        113        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929       0.93      0.842      0.925      0.769

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      15.2G     0.5664     0.3835     0.9134        118        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.917      0.867      0.929      0.771

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      15.4G     0.5632     0.3668     0.8936         68        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.915      0.867       0.93      0.782

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      15.4G     0.5594     0.3643     0.8994         95        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.924      0.856      0.929      0.792

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      15.4G     0.5635      0.359     0.8999        122        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.923      0.857       0.93      0.793

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      15.3G     0.5725     0.3679     0.8946         75        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.932      0.858      0.933      0.794

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      15.3G     0.6254     0.3951     0.9293        142        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.925      0.863      0.932      0.796

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      15.3G      0.642     0.4066     0.9224        104        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.871      0.906      0.932      0.797

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      15.2G     0.6799     0.4366     0.9771        164        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.896      0.894      0.932      0.797

10 epochs completed in 0.018 hours.
Optimizer stripped from runs/detect/step_5_finetune2/weights/last.pt, 135.6MB
Optimizer stripped from runs/detect/step_5_finetune2/weights/best.pt, 135.6MB

Validating runs/detect/step_5_finetune2/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 33,747,610 parameters, 0 gradients, 128.5 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.871      0.906      0.932      0.797
Speed: 0.1ms preprocess, 3.4ms inference, 0.0ms loss, 1.5ms postprocess per image
After fine-tuning
Model Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 33,747,610 parameters, 0 gradients, 128.5 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3665.2±391.2 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.933      0.857      0.934      0.795
Speed: 1.5ms preprocess, 15.6ms inference, 0.0ms loss, 3.4ms postprocess per image
Results saved to runs/detect/step_5_post_val2
After fine tuning mAP=0.7951189977994946
After post fine-tuning validation
Model Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruning step 7: progress=0.883, ratio=0.132
After Pruning
Model Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,913,682 parameters, 74,160 gradients, 125.2 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1738.3±815.1 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.824      0.863      0.908      0.742
Speed: 1.1ms preprocess, 13.4ms inference, 0.0ms loss, 2.8ms postprocess per image
Results saved to runs/detect/step_6_pre_val2
After post-pruning validation
Model Conv2d(3, 55, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 7: MACs=62.7046164 G, #Params=32.933815 M, mAP=0.7416030070446816, speed up=1.319303284981104
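The "speed up" figure appears to be the theoretical ratio of the dense model's MACs to the pruned model's MACs, not a measured wall-clock speedup; a quick check using the logged numbers:

```python
# Sketch: the logged "speed up" matches baseline MACs / pruned MACs.
# Both values below are copied from the log, not recomputed.
BASE_MACS_G = 82.72641   # "Before Pruning: MACs= 82.72641 G"

def theoretical_speedup(pruned_macs_g, base_macs_g=BASE_MACS_G):
    """Compute reduction as a ratio of multiply-accumulate counts."""
    return base_macs_g / pruned_macs_g

print(theoretical_speedup(62.7046164))  # ~1.3193, matching iter 7's log
```

The measured per-image inference times in the log vary far more (3.1 ms to 15.6 ms) because they depend on batch state and GPU warm-up, so the MACs ratio is the more stable progress metric.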
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: same configuration as the initial run above, except name=step_6_finetune2 and save_dir=runs/detect/step_6_finetune2
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1812.0±691.5 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/lab
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 881.6±250.0 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label
Plotting labels to runs/detect/step_6_finetune2/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_6_finetune2
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      15.1G     0.5837     0.3747     0.9064        121        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.895      0.851      0.923       0.76

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      15.1G     0.5111     0.3316     0.8736        113        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.905       0.87      0.929      0.777

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10        15G     0.5221     0.3494     0.8919        118        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.909      0.875      0.932      0.786

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10        15G      0.531     0.3318     0.8828         68        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.911      0.874      0.927      0.785

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      15.1G     0.5416     0.3462     0.8843         95        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.886      0.888       0.93      0.786

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      15.1G     0.5524      0.354       0.89        122        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.893      0.877      0.926      0.783

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      15.1G     0.5693     0.3642     0.8824         75        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.907      0.869      0.926      0.786

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      15.1G     0.6151     0.3821     0.9231        142        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.906      0.878      0.929      0.788

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      15.1G     0.6315     0.4039     0.9118        104        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.915      0.883      0.932      0.797

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10        15G     0.6669     0.4226     0.9674        164        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.914      0.884      0.933      0.795

10 epochs completed in 0.015 hours.
Optimizer stripped from runs/detect/step_6_finetune2/weights/last.pt, 132.3MB
Optimizer stripped from runs/detect/step_6_finetune2/weights/best.pt, 132.3MB

Validating runs/detect/step_6_finetune2/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,913,682 parameters, 0 gradients, 125.2 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.915      0.883      0.932      0.796
Speed: 0.1ms preprocess, 3.0ms inference, 0.0ms loss, 1.1ms postprocess per image
After fine-tuning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,913,682 parameters, 0 gradients, 125.2 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 3042.0±594.6 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.915      0.877      0.931      0.794
Speed: 0.5ms preprocess, 9.4ms inference, 0.0ms loss, 1.6ms postprocess per image
Results saved to runs/detect/step_6_post_val2
After fine-tuning: mAP=0.7942548824738299
After post fine-tuning validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruning step 8: progress=0.955, ratio=0.143
After Pruning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,669,140 parameters, 74,160 gradients, 124.6 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1360.5±623.7 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.916      0.867      0.927      0.789
Speed: 0.7ms preprocess, 9.3ms inference, 0.0ms loss, 1.6ms postprocess per image
Results saved to runs/detect/step_7_pre_val2
After post-pruning validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 8: MACs=62.4070664 G, #Params=32.689204 M, mAP=0.7892334700405261, speed up=1.325593577332454
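The per-iteration pattern in these logs (prune, validate, fine-tune, validate again) suggests a control loop along the following lines. `prune_step`, `finetune`, and `validate` are hypothetical stand-ins for the script's actual helpers, which are not shown, and the early-stop condition assumes `max_map_drop` bounds the drop relative to the dense baseline:

```python
# Sketch of the iterative prune-then-finetune loop implied by the log.
# Helper names are hypothetical; the model is threaded through each
# stage so the loop works with pure functions in this illustration.
def iterative_prune(model, prune_step, finetune, validate,
                    steps=10, max_map_drop=0.2):
    base_map = validate(model)            # dense baseline, e.g. 0.660
    for i in range(1, steps + 1):
        model = prune_step(model, i)      # remove channels per schedule
        pre_map = validate(model)         # accuracy right after pruning
        model = finetune(model)           # recovery epochs (10 in the log)
        post_map = validate(model)        # accuracy after recovery
        print(f"iter {i}: pre={pre_map:.3f} post={post_map:.3f}")
        if base_map - post_map > max_map_drop:
            break                         # accuracy fell too far; stop
    return model
```

With `max_map_drop=0.2` and post-fine-tune mAP holding around 0.79-0.81 against a 0.660 baseline, the stop condition never triggers in this run, so all ten steps execute.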
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: same configuration as the initial run above, except name=step_7_finetune2 and save_dir=runs/detect/step_7_finetune2
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1005.4±526.3 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/lab
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 626.8±111.3 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label
Plotting labels to runs/detect/step_7_finetune2/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_7_finetune2
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      14.9G      0.495     0.3205       0.88        121        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.923      0.874      0.929      0.799

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      15.1G     0.4323     0.2908     0.8485        113        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.917      0.884      0.936      0.794

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10        15G     0.4617     0.3107     0.8715        118        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.908      0.886      0.933      0.797

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10        15G     0.4587     0.3004     0.8575         68        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.929      0.878       0.93      0.795

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      14.9G     0.4739     0.3132     0.8601         95        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.929      0.878      0.932      0.796

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      14.9G     0.4952     0.3178     0.8655        122        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.934      0.864      0.929      0.795

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10        15G     0.4984      0.326     0.8585         75        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929       0.92      0.877       0.93      0.796

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      15.1G     0.5569     0.3589      0.894        142        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.926      0.872      0.929      0.798

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      15.1G     0.5973     0.3794      0.897        104        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.929      0.873      0.934      0.804

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10        15G     0.6558     0.4162     0.9558        164        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.929      0.878      0.936      0.805

10 epochs completed in 0.012 hours.
Optimizer stripped from runs/detect/step_7_finetune2/weights/last.pt, 131.3MB
Optimizer stripped from runs/detect/step_7_finetune2/weights/best.pt, 131.3MB

Validating runs/detect/step_7_finetune2/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,669,140 parameters, 0 gradients, 124.6 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.929      0.878      0.936      0.805
Speed: 0.1ms preprocess, 2.5ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,669,140 parameters, 0 gradients, 124.6 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4798.0±2031.8 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.937      0.883      0.937      0.802
Speed: 0.2ms preprocess, 6.1ms inference, 0.0ms loss, 0.5ms postprocess per image
Results saved to runs/detect/step_7_post_val2
After fine-tuning: mAP=0.8021166946231042
After post fine-tuning validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruning step 9: progress=0.984, ratio=0.148
After Pruning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 74,160 gradients, 123.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4707.7±1115.9 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.931      0.863      0.921      0.768
Speed: 0.1ms preprocess, 6.1ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_8_pre_val2
After post-pruning validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 9: MACs=61.8488912 G, #Params=32.436843 M, mAP=0.7680307574283493, speed up=1.3375568226839933
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: same configuration as the initial run above, except name=step_8_finetune2 and save_dir=runs/detect/step_8_finetune2
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4035.8±1487.2 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/lab
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 1631.0±420.7 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label
Plotting labels to runs/detect/step_8_finetune2/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_8_finetune2
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      14.9G     0.4943     0.3212     0.8714        121        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929       0.93      0.869      0.926      0.789

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      14.9G     0.4371     0.2908     0.8475        113        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.935      0.869      0.931      0.802

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      14.9G      0.443     0.2951      0.864        118        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.933      0.873      0.934      0.801

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      14.9G     0.4433      0.295     0.8514         68        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.909      0.892      0.933      0.801

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10      14.9G     0.4481     0.2939      0.852         95        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.912      0.896      0.932      0.797

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      14.9G     0.4641     0.3056     0.8523        122        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.917       0.89      0.935      0.802

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      14.9G     0.4891     0.3107      0.858         75        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.931      0.886      0.937      0.802

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      14.9G      0.532      0.338     0.8835        142        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.942      0.887      0.936      0.802

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10        15G     0.5758     0.3629     0.8903        104        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.941      0.893      0.936      0.807

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      14.9G     0.6455     0.3983     0.9429        164        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.942      0.893      0.938      0.808

10 epochs completed in 0.008 hours.
Optimizer stripped from runs/detect/step_8_finetune2/weights/last.pt, 130.3MB
Optimizer stripped from runs/detect/step_8_finetune2/weights/best.pt, 130.3MB

Validating runs/detect/step_8_finetune2/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.942      0.893      0.938      0.808
Speed: 0.1ms preprocess, 2.5ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 2456.1±440.7 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.933      0.893      0.935      0.806
Speed: 0.1ms preprocess, 6.2ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_8_post_val2
After fine-tuning: mAP=0.8062525404490082
After post fine-tuning validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruning step 10: progress=0.996, ratio=0.149
After Pruning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 74,160 gradients, 123.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5303.3±1442.4 MB/s, size: 44.7 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.934      0.895      0.936      0.806
Speed: 0.1ms preprocess, 6.2ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_9_pre_val2
After post-pruning validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
After pruning iter 10: MACs=61.8488912 G, #Params=32.436843 M, mAP=0.8062440401624619, speed up=1.3375568226839933
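Relative to the dense baseline (43.69 M parameters, 82.73 G MACs), the final pruned model is about a quarter smaller in both respects while mAP50-95 ends above the starting 0.660. A quick check of the end-to-end numbers, all copied from the log:

```python
# Final compression summary, using values copied from the log.
base_params, final_params = 43.69152e6, 32.436843e6
base_macs,   final_macs   = 82.72641e9, 61.8488912e9

param_reduction = 1 - final_params / base_params
speedup = base_macs / final_macs

print(f"params: -{param_reduction:.1%}, MACs speed-up: {speedup:.3f}x")
# params: -25.8%, MACs speed-up: 1.338x
```

Note that the target prune rate of 0.15 refers to per-layer channel ratios, not total parameters; removing ~15% of channels compounds across connected layers, which is how the parameter count falls by ~26%.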
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
engine/trainer: same configuration as the initial run above, except name=step_9_finetune2 and save_dir=runs/detect/step_9_finetune2
Freezing layer 'model.22.dfl.conv.weight'
train: Fast image access ✅ (ping: 0.0±0.0 ms, read: 4503.1±1040.0 MB/s, size: 50.9 KB)
train: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/lab
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 795.8±192.5 MB/s, size: 52.5 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label
Plotting labels to runs/detect/step_9_finetune2/labels.jpg... 
optimizer: 'optimizer=auto' found, ignoring 'lr0=0.01' and 'momentum=0.937' and determining best 'optimizer', 'lr0' and 'momentum' automatically... 
optimizer: AdamW(lr=0.000119, momentum=0.9) with parameter groups 105 weight(decay=0.0), 112 weight(decay=0.0005), 111 bias(decay=0.0)
Image sizes 640 train, 640 val
Using 8 dataloader workers
Logging results to runs/detect/step_9_finetune2
Starting training for 10 epochs...
Closing dataloader mosaic

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       1/10      15.3G      0.424     0.2844     0.8499        121        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.937      0.892      0.939      0.811

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       2/10      14.9G     0.3993     0.2626     0.8333        113        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.923      0.896      0.942      0.808

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       3/10      14.9G     0.4118     0.2764     0.8534        118        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.914      0.899      0.941      0.808

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       4/10      14.9G     0.4239     0.2808     0.8413         68        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.926      0.892      0.937      0.807

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       5/10        15G     0.4537     0.2909     0.8466         95        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.942      0.891      0.935      0.805

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       6/10      15.1G     0.4596      0.299     0.8484        122        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.947      0.885      0.938      0.807

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       7/10      14.9G     0.4647     0.3001     0.8475         75        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.948      0.887       0.94      0.807

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       8/10      14.9G     0.5177     0.3237     0.8788        142        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.947      0.891      0.942      0.807

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
       9/10      14.9G     0.5476     0.3486     0.8788        104        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.946      0.891      0.942      0.811

      Epoch    GPU_mem   box_loss   cls_loss   dfl_loss  Instances       Size
      10/10      15.3G     0.6247     0.3905      0.942        164        640: 100%|██████████
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.944      0.889      0.941      0.811

10 epochs completed in 0.008 hours.
Optimizer stripped from runs/detect/step_9_finetune2/weights/last.pt, 130.3MB
Optimizer stripped from runs/detect/step_9_finetune2/weights/best.pt, 130.3MB

Validating runs/detect/step_9_finetune2/weights/best.pt...
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs
                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.946      0.891      0.942      0.811
Speed: 0.1ms preprocess, 2.5ms inference, 0.0ms loss, 0.3ms postprocess per image
After fine-tuning
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5168.4±959.1 MB/s, size: 53.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.937      0.892      0.939      0.806
Speed: 0.1ms preprocess, 6.3ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/step_9_post_val2
After fine-tuning mAP=0.8059532806050649
After post fine-tuning validation
Model Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Pruner Conv2d(3, 54, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CPU (Intel Core(TM) i9-14900KS)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs

PyTorch: starting from 'runs/detect/step_9_finetune2/weights/best.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (124.2 MB)

ONNX: starting export with onnx 1.17.0 opset 10...
W0205 17:34:38.183000 260195 site-packages/torch/onnx/_internal/exporter/_compat.py:114] Setting ONNX exporter to use operator set version 18 because the requested opset_version 10 is a lower version than we have implementations for. Automatic version conversion will be performed, which may not be successful at converting to the requested version. If version conversion is unsuccessful, the opset version of the exported model will be kept at 18. Please consider setting opset_version >=18 to leverage latest ONNX features
The model version conversion is not supported by the onnxscript version converter and fallback is enabled. The model will be converted using the onnx C API (target version: 10).
Failed to convert the model to the target version 10 using the ONNX C API. The model was not modified
Traceback (most recent call last):
  File "/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/onnxscript/version_converter/__init__.py", line 127, in call
    converted_proto = _c_api_utils.call_onnx_api(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/onnxscript/version_converter/_c_api_utils.py", line 65, in call_onnx_api
    result = func(proto)
             ^^^^^^^^^^^
  File "/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/onnxscript/version_converter/__init__.py", line 122, in _partial_convert_version
    return onnx.version_converter.convert_version(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/onnx/version_converter.py", line 38, in convert_version
    converted_model_str = C.convert_version(model_str, target_version)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: /github/workspace/onnx/version_converter/BaseConverter.h:70: adapter_lookup: Assertion `false` failed: No Adapter To Version $17 for Resize
Applied 1 of general pattern rewrite rules.
ONNX: slimming with onnxslim 0.1.59...
ONNX: export success ✅ 2.9s, saved as 'runs/detect/step_9_finetune2/weights/best.onnx' (123.8 MB)

Export complete (3.4s)
Results saved to /home/nathan/Developer/FasterAI-Labs/gh/fasterai/nbs/tutorials/prune/runs/detect/step_9_finetune2/weights
Predict:         yolo predict task=detect model=runs/detect/step_9_finetune2/weights/best.onnx imgsz=640  
Validate:        yolo val task=detect model=runs/detect/step_9_finetune2/weights/best.onnx imgsz=640 data=/home/nathan/miniconda3/envs/dev/lib/python3.12/site-packages/ultralytics/cfg/datasets/coco128.yaml  
Visualize:       https://netron.app
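The exporter warning above shows the requested `opset=10` cannot be reached (ONNX has no down-conversion adapter for `Resize`), so the exported model silently stays at opset 18. If the target runtime does not pin you to an old opset, it is simpler to request a supported one up front. A minimal sketch, with paths and arguments assumed from this run; the actual `YOLO(...).export(**export_args)` call is left commented so the snippet stands alone:

```python
# Export arguments for a re-export at a modern opset; opset >= 18 avoids the
# failed 18 -> 10 version conversion logged above.
export_args = dict(format="onnx", opset=18, imgsz=640, simplify=True)

# With ultralytics installed, the re-export would be:
# YOLO("runs/detect/step_9_finetune2/weights/best.pt").export(**export_args)
print(export_args)
```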

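The `max_map_drop=0.2` budget from `Args` can be checked directly against the logged numbers (0.66035 before pruning, ~0.806 after the step-9 fine-tune). A small sketch of that guard; here the "drop" comes out negative because fine-tuning on coco128 actually raised mAP:

```python
base_map = 0.66035                   # "Before Pruning ... mAP= 0.66035" from the log
finetuned_map = 0.8059532806050649   # "After fine-tuning mAP=..." from the log
max_map_drop = 0.2                   # from the Args used for this run

drop = base_map - finetuned_map
print(f"mAP drop: {drop:+.4f} (limit {max_map_drop})")
assert drop <= max_map_drop, "pruning degraded mAP beyond the allowed budget"
```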
Post-Training Checks

# Reload the pruned + fine-tuned checkpoint and re-count MACs/params with torch-pruning
model = YOLO('runs/detect/step_9_finetune2/weights/best.pt')
example_inputs = torch.randn(1, 3, 640, 640).to(model.device)
base_macs, base_nparams = tp.utils.count_ops_and_params(model.model, example_inputs); base_macs, base_nparams
(61848891200.0, 32436843)
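Comparing these counts against the pre-pruning baseline from the first validation (82.72641 G MACs, 43.69152 M parameters) gives the achieved compression. A quick sketch using only the numbers printed in this notebook:

```python
# Counts taken from the logs in this notebook.
orig_macs, orig_params = 82.72641e9, 43.69152e6       # before pruning
pruned_macs, pruned_params = 61848891200.0, 32436843  # after pruning + fine-tuning

macs_reduction = 1 - pruned_macs / orig_macs
params_reduction = 1 - pruned_params / orig_params
print(f"MACs:   -{macs_reduction:.1%}")    # roughly a quarter of the compute removed
print(f"Params: -{params_reduction:.1%}")
```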
results = model.val(
                data='coco128.yaml',
                batch=1,
                imgsz=640,
                verbose=False,
            )
Ultralytics 8.3.162 🚀 Python-3.12.11 torch-2.9.1+cu128 CUDA:0 (NVIDIA GeForce RTX 5090, 32109MiB)
YOLOv8l summary (fused): 121 layers, 32,416,863 parameters, 0 gradients, 123.4 GFLOPs
val: Fast image access ✅ (ping: 0.0±0.0 ms, read: 5274.3±1449.8 MB/s, size: 56.4 KB)
val: Scanning /home/nathan/Developer/FasterAI-Labs/Projects/ALX Systems/datasets/coco128/label

                 Class     Images  Instances      Box(P          R      mAP50  mAP50-95): 100%
                   all        128        929      0.949       0.89       0.94      0.813
Speed: 0.1ms preprocess, 5.9ms inference, 0.0ms loss, 0.4ms postprocess per image
Results saved to runs/detect/val10
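The full repr below is verbose; in practice the returned `DetMetrics` object exposes scalar accessors (`results.box.map` is mAP50-95, `results.box.map50` is mAP50, and `results.box.mp`/`results.box.mr` are mean precision/recall). A self-contained sketch using a stand-in object filled with this run's values — with Ultralytics, read the same attributes off the real object returned by `model.val(...)`:

```python
from types import SimpleNamespace

# Stand-in mirroring the DetMetrics attribute layout, filled with the values
# from the validation table above (P=0.949, R=0.890, mAP50=0.940, mAP50-95=0.813).
metrics = SimpleNamespace(box=SimpleNamespace(map=0.813, map50=0.940, mp=0.949, mr=0.890))

print(f"mAP50-95: {metrics.box.map:.3f}  mAP50: {metrics.box.map50:.3f}")
```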
results
ultralytics.utils.metrics.DetMetrics object with attributes:

ap_class_index: array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 11, 13, 14, 15, 16, 17, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 67, 68, 69, 71, 72, 73, 74, 75, 76, 77, 79])
box: ultralytics.utils.metrics.Metric object
confusion_matrix: <ultralytics.utils.metrics.ConfusionMatrix object>
curves: ['Precision-Recall(B)', 'F1-Confidence(B)', 'Precision-Confidence(B)', 'Recall-Confidence(B)']
curves_results: [per-curve x/y arrays — 1,000 evenly spaced confidence thresholds from 0 to 1 paired with per-class values for each curve listed above; full numeric dump omitted]
           0.88889,     0.88989,     0.89089,     0.89189,     0.89289,     0.89389,     0.89489,      0.8959,      0.8969,      0.8979,      0.8989,      0.8999,      0.9009,      0.9019,      0.9029,      0.9039,      0.9049,     0.90591,     0.90691,     0.90791,     0.90891,     0.90991,     0.91091,     0.91191,
           0.91291,     0.91391,     0.91491,     0.91592,     0.91692,     0.91792,     0.91892,     0.91992,     0.92092,     0.92192,     0.92292,     0.92392,     0.92492,     0.92593,     0.92693,     0.92793,     0.92893,     0.92993,     0.93093,     0.93193,     0.93293,     0.93393,     0.93493,     0.93594,
           0.93694,     0.93794,     0.93894,     0.93994,     0.94094,     0.94194,     0.94294,     0.94394,     0.94494,     0.94595,     0.94695,     0.94795,     0.94895,     0.94995,     0.95095,     0.95195,     0.95295,     0.95395,     0.95495,     0.95596,     0.95696,     0.95796,     0.95896,     0.95996,
           0.96096,     0.96196,     0.96296,     0.96396,     0.96496,     0.96597,     0.96697,     0.96797,     0.96897,     0.96997,     0.97097,     0.97197,     0.97297,     0.97397,     0.97497,     0.97598,     0.97698,     0.97798,     0.97898,     0.97998,     0.98098,     0.98198,     0.98298,     0.98398,
           0.98498,     0.98599,     0.98699,     0.98799,     0.98899,     0.98999,     0.99099,     0.99199,     0.99299,     0.99399,     0.99499,       0.996,       0.997,       0.998,       0.999,           1]), array([[    0.39068,     0.39068,     0.47539, ...,           0,           0,           0],
       [    0.14286,     0.14286,     0.18083, ...,           0,           0,           0],
       [      0.158,       0.158,     0.19265, ...,           0,           0,           0],
       ...,
       [    0.22222,     0.22222,     0.34654, ...,           0,           0,           0],
       [    0.58333,     0.58333,     0.70817, ...,           0,           0,           0],
       [    0.43478,     0.43478,     0.51359, ...,           0,           0,           0]]), 'Confidence', 'F1'], [array([          0,    0.001001,    0.002002,    0.003003,    0.004004,    0.005005,    0.006006,    0.007007,    0.008008,    0.009009,     0.01001,    0.011011,    0.012012,    0.013013,    0.014014,    0.015015,    0.016016,    0.017017,    0.018018,    0.019019,     0.02002,    0.021021,    0.022022,    0.023023,
          0.024024,    0.025025,    0.026026,    0.027027,    0.028028,    0.029029,     0.03003,    0.031031,    0.032032,    0.033033,    0.034034,    0.035035,    0.036036,    0.037037,    0.038038,    0.039039,     0.04004,    0.041041,    0.042042,    0.043043,    0.044044,    0.045045,    0.046046,    0.047047,
          0.048048,    0.049049,     0.05005,    0.051051,    0.052052,    0.053053,    0.054054,    0.055055,    0.056056,    0.057057,    0.058058,    0.059059,     0.06006,    0.061061,    0.062062,    0.063063,    0.064064,    0.065065,    0.066066,    0.067067,    0.068068,    0.069069,     0.07007,    0.071071,
          0.072072,    0.073073,    0.074074,    0.075075,    0.076076,    0.077077,    0.078078,    0.079079,     0.08008,    0.081081,    0.082082,    0.083083,    0.084084,    0.085085,    0.086086,    0.087087,    0.088088,    0.089089,     0.09009,    0.091091,    0.092092,    0.093093,    0.094094,    0.095095,
          0.096096,    0.097097,    0.098098,    0.099099,      0.1001,      0.1011,      0.1021,      0.1031,      0.1041,     0.10511,     0.10611,     0.10711,     0.10811,     0.10911,     0.11011,     0.11111,     0.11211,     0.11311,     0.11411,     0.11512,     0.11612,     0.11712,     0.11812,     0.11912,
           0.12012,     0.12112,     0.12212,     0.12312,     0.12412,     0.12513,     0.12613,     0.12713,     0.12813,     0.12913,     0.13013,     0.13113,     0.13213,     0.13313,     0.13413,     0.13514,     0.13614,     0.13714,     0.13814,     0.13914,     0.14014,     0.14114,     0.14214,     0.14314,
           0.14414,     0.14515,     0.14615,     0.14715,     0.14815,     0.14915,     0.15015,     0.15115,     0.15215,     0.15315,     0.15415,     0.15516,     0.15616,     0.15716,     0.15816,     0.15916,     0.16016,     0.16116,     0.16216,     0.16316,     0.16416,     0.16517,     0.16617,     0.16717,
           0.16817,     0.16917,     0.17017,     0.17117,     0.17217,     0.17317,     0.17417,     0.17518,     0.17618,     0.17718,     0.17818,     0.17918,     0.18018,     0.18118,     0.18218,     0.18318,     0.18418,     0.18519,     0.18619,     0.18719,     0.18819,     0.18919,     0.19019,     0.19119,
           0.19219,     0.19319,     0.19419,      0.1952,      0.1962,      0.1972,      0.1982,      0.1992,      0.2002,      0.2012,      0.2022,      0.2032,      0.2042,     0.20521,     0.20621,     0.20721,     0.20821,     0.20921,     0.21021,     0.21121,     0.21221,     0.21321,     0.21421,     0.21522,
           0.21622,     0.21722,     0.21822,     0.21922,     0.22022,     0.22122,     0.22222,     0.22322,     0.22422,     0.22523,     0.22623,     0.22723,     0.22823,     0.22923,     0.23023,     0.23123,     0.23223,     0.23323,     0.23423,     0.23524,     0.23624,     0.23724,     0.23824,     0.23924,
           0.24024,     0.24124,     0.24224,     0.24324,     0.24424,     0.24525,     0.24625,     0.24725,     0.24825,     0.24925,     0.25025,     0.25125,     0.25225,     0.25325,     0.25425,     0.25526,     0.25626,     0.25726,     0.25826,     0.25926,     0.26026,     0.26126,     0.26226,     0.26326,
           0.26426,     0.26527,     0.26627,     0.26727,     0.26827,     0.26927,     0.27027,     0.27127,     0.27227,     0.27327,     0.27427,     0.27528,     0.27628,     0.27728,     0.27828,     0.27928,     0.28028,     0.28128,     0.28228,     0.28328,     0.28428,     0.28529,     0.28629,     0.28729,
           0.28829,     0.28929,     0.29029,     0.29129,     0.29229,     0.29329,     0.29429,      0.2953,      0.2963,      0.2973,      0.2983,      0.2993,      0.3003,      0.3013,      0.3023,      0.3033,      0.3043,     0.30531,     0.30631,     0.30731,     0.30831,     0.30931,     0.31031,     0.31131,
           0.31231,     0.31331,     0.31431,     0.31532,     0.31632,     0.31732,     0.31832,     0.31932,     0.32032,     0.32132,     0.32232,     0.32332,     0.32432,     0.32533,     0.32633,     0.32733,     0.32833,     0.32933,     0.33033,     0.33133,     0.33233,     0.33333,     0.33433,     0.33534,
           0.33634,     0.33734,     0.33834,     0.33934,     0.34034,     0.34134,     0.34234,     0.34334,     0.34434,     0.34535,     0.34635,     0.34735,     0.34835,     0.34935,     0.35035,     0.35135,     0.35235,     0.35335,     0.35435,     0.35536,     0.35636,     0.35736,     0.35836,     0.35936,
           0.36036,     0.36136,     0.36236,     0.36336,     0.36436,     0.36537,     0.36637,     0.36737,     0.36837,     0.36937,     0.37037,     0.37137,     0.37237,     0.37337,     0.37437,     0.37538,     0.37638,     0.37738,     0.37838,     0.37938,     0.38038,     0.38138,     0.38238,     0.38338,
           0.38438,     0.38539,     0.38639,     0.38739,     0.38839,     0.38939,     0.39039,     0.39139,     0.39239,     0.39339,     0.39439,      0.3954,      0.3964,      0.3974,      0.3984,      0.3994,      0.4004,      0.4014,      0.4024,      0.4034,      0.4044,     0.40541,     0.40641,     0.40741,
           0.40841,     0.40941,     0.41041,     0.41141,     0.41241,     0.41341,     0.41441,     0.41542,     0.41642,     0.41742,     0.41842,     0.41942,     0.42042,     0.42142,     0.42242,     0.42342,     0.42442,     0.42543,     0.42643,     0.42743,     0.42843,     0.42943,     0.43043,     0.43143,
           0.43243,     0.43343,     0.43443,     0.43544,     0.43644,     0.43744,     0.43844,     0.43944,     0.44044,     0.44144,     0.44244,     0.44344,     0.44444,     0.44545,     0.44645,     0.44745,     0.44845,     0.44945,     0.45045,     0.45145,     0.45245,     0.45345,     0.45445,     0.45546,
           0.45646,     0.45746,     0.45846,     0.45946,     0.46046,     0.46146,     0.46246,     0.46346,     0.46446,     0.46547,     0.46647,     0.46747,     0.46847,     0.46947,     0.47047,     0.47147,     0.47247,     0.47347,     0.47447,     0.47548,     0.47648,     0.47748,     0.47848,     0.47948,
           0.48048,     0.48148,     0.48248,     0.48348,     0.48448,     0.48549,     0.48649,     0.48749,     0.48849,     0.48949,     0.49049,     0.49149,     0.49249,     0.49349,     0.49449,      0.4955,      0.4965,      0.4975,      0.4985,      0.4995,      0.5005,      0.5015,      0.5025,      0.5035,
            0.5045,     0.50551,     0.50651,     0.50751,     0.50851,     0.50951,     0.51051,     0.51151,     0.51251,     0.51351,     0.51451,     0.51552,     0.51652,     0.51752,     0.51852,     0.51952,     0.52052,     0.52152,     0.52252,     0.52352,     0.52452,     0.52553,     0.52653,     0.52753,
           0.52853,     0.52953,     0.53053,     0.53153,     0.53253,     0.53353,     0.53453,     0.53554,     0.53654,     0.53754,     0.53854,     0.53954,     0.54054,     0.54154,     0.54254,     0.54354,     0.54454,     0.54555,     0.54655,     0.54755,     0.54855,     0.54955,     0.55055,     0.55155,
           0.55255,     0.55355,     0.55455,     0.55556,     0.55656,     0.55756,     0.55856,     0.55956,     0.56056,     0.56156,     0.56256,     0.56356,     0.56456,     0.56557,     0.56657,     0.56757,     0.56857,     0.56957,     0.57057,     0.57157,     0.57257,     0.57357,     0.57457,     0.57558,
           0.57658,     0.57758,     0.57858,     0.57958,     0.58058,     0.58158,     0.58258,     0.58358,     0.58458,     0.58559,     0.58659,     0.58759,     0.58859,     0.58959,     0.59059,     0.59159,     0.59259,     0.59359,     0.59459,      0.5956,      0.5966,      0.5976,      0.5986,      0.5996,
            0.6006,      0.6016,      0.6026,      0.6036,      0.6046,     0.60561,     0.60661,     0.60761,     0.60861,     0.60961,     0.61061,     0.61161,     0.61261,     0.61361,     0.61461,     0.61562,     0.61662,     0.61762,     0.61862,     0.61962,     0.62062,     0.62162,     0.62262,     0.62362,
           0.62462,     0.62563,     0.62663,     0.62763,     0.62863,     0.62963,     0.63063,     0.63163,     0.63263,     0.63363,     0.63463,     0.63564,     0.63664,     0.63764,     0.63864,     0.63964,     0.64064,     0.64164,     0.64264,     0.64364,     0.64464,     0.64565,     0.64665,     0.64765,
           0.64865,     0.64965,     0.65065,     0.65165,     0.65265,     0.65365,     0.65465,     0.65566,     0.65666,     0.65766,     0.65866,     0.65966,     0.66066,     0.66166,     0.66266,     0.66366,     0.66466,     0.66567,     0.66667,     0.66767,     0.66867,     0.66967,     0.67067,     0.67167,
           0.67267,     0.67367,     0.67467,     0.67568,     0.67668,     0.67768,     0.67868,     0.67968,     0.68068,     0.68168,     0.68268,     0.68368,     0.68468,     0.68569,     0.68669,     0.68769,     0.68869,     0.68969,     0.69069,     0.69169,     0.69269,     0.69369,     0.69469,      0.6957,
            0.6967,      0.6977,      0.6987,      0.6997,      0.7007,      0.7017,      0.7027,      0.7037,      0.7047,     0.70571,     0.70671,     0.70771,     0.70871,     0.70971,     0.71071,     0.71171,     0.71271,     0.71371,     0.71471,     0.71572,     0.71672,     0.71772,     0.71872,     0.71972,
           0.72072,     0.72172,     0.72272,     0.72372,     0.72472,     0.72573,     0.72673,     0.72773,     0.72873,     0.72973,     0.73073,     0.73173,     0.73273,     0.73373,     0.73473,     0.73574,     0.73674,     0.73774,     0.73874,     0.73974,     0.74074,     0.74174,     0.74274,     0.74374,
           0.74474,     0.74575,     0.74675,     0.74775,     0.74875,     0.74975,     0.75075,     0.75175,     0.75275,     0.75375,     0.75475,     0.75576,     0.75676,     0.75776,     0.75876,     0.75976,     0.76076,     0.76176,     0.76276,     0.76376,     0.76476,     0.76577,     0.76677,     0.76777,
           0.76877,     0.76977,     0.77077,     0.77177,     0.77277,     0.77377,     0.77477,     0.77578,     0.77678,     0.77778,     0.77878,     0.77978,     0.78078,     0.78178,     0.78278,     0.78378,     0.78478,     0.78579,     0.78679,     0.78779,     0.78879,     0.78979,     0.79079,     0.79179,
           0.79279,     0.79379,     0.79479,      0.7958,      0.7968,      0.7978,      0.7988,      0.7998,      0.8008,      0.8018,      0.8028,      0.8038,      0.8048,     0.80581,     0.80681,     0.80781,     0.80881,     0.80981,     0.81081,     0.81181,     0.81281,     0.81381,     0.81481,     0.81582,
           0.81682,     0.81782,     0.81882,     0.81982,     0.82082,     0.82182,     0.82282,     0.82382,     0.82482,     0.82583,     0.82683,     0.82783,     0.82883,     0.82983,     0.83083,     0.83183,     0.83283,     0.83383,     0.83483,     0.83584,     0.83684,     0.83784,     0.83884,     0.83984,
           0.84084,     0.84184,     0.84284,     0.84384,     0.84484,     0.84585,     0.84685,     0.84785,     0.84885,     0.84985,     0.85085,     0.85185,     0.85285,     0.85385,     0.85485,     0.85586,     0.85686,     0.85786,     0.85886,     0.85986,     0.86086,     0.86186,     0.86286,     0.86386,
           0.86486,     0.86587,     0.86687,     0.86787,     0.86887,     0.86987,     0.87087,     0.87187,     0.87287,     0.87387,     0.87487,     0.87588,     0.87688,     0.87788,     0.87888,     0.87988,     0.88088,     0.88188,     0.88288,     0.88388,     0.88488,     0.88589,     0.88689,     0.88789,
           0.88889,     0.88989,     0.89089,     0.89189,     0.89289,     0.89389,     0.89489,      0.8959,      0.8969,      0.8979,      0.8989,      0.8999,      0.9009,      0.9019,      0.9029,      0.9039,      0.9049,     0.90591,     0.90691,     0.90791,     0.90891,     0.90991,     0.91091,     0.91191,
           0.91291,     0.91391,     0.91491,     0.91592,     0.91692,     0.91792,     0.91892,     0.91992,     0.92092,     0.92192,     0.92292,     0.92392,     0.92492,     0.92593,     0.92693,     0.92793,     0.92893,     0.92993,     0.93093,     0.93193,     0.93293,     0.93393,     0.93493,     0.93594,
           0.93694,     0.93794,     0.93894,     0.93994,     0.94094,     0.94194,     0.94294,     0.94394,     0.94494,     0.94595,     0.94695,     0.94795,     0.94895,     0.94995,     0.95095,     0.95195,     0.95295,     0.95395,     0.95495,     0.95596,     0.95696,     0.95796,     0.95896,     0.95996,
           0.96096,     0.96196,     0.96296,     0.96396,     0.96496,     0.96597,     0.96697,     0.96797,     0.96897,     0.96997,     0.97097,     0.97197,     0.97297,     0.97397,     0.97497,     0.97598,     0.97698,     0.97798,     0.97898,     0.97998,     0.98098,     0.98198,     0.98298,     0.98398,
           0.98498,     0.98599,     0.98699,     0.98799,     0.98899,     0.98999,     0.99099,     0.99199,     0.99299,     0.99399,     0.99499,       0.996,       0.997,       0.998,       0.999,           1]), array([[    0.24545,     0.24545,     0.31671, ...,           1,           1,           1],
       [   0.078125,    0.078125,     0.10142, ...,           1,           1,           1],
       [   0.087356,    0.087356,     0.10904, ...,           1,           1,           1],
       ...,
       [      0.125,       0.125,     0.20959, ...,           1,           1,           1],
       [    0.41176,     0.41176,     0.54819, ...,           1,           1,           1],
       [    0.27778,     0.27778,     0.34552, ...,           1,           1,           1]]), 'Confidence', 'Precision'], [array([          0,    0.001001,    0.002002,    0.003003,    0.004004,    0.005005,    0.006006,    0.007007,    0.008008,    0.009009,     0.01001,    0.011011,    0.012012,    0.013013,    0.014014,    0.015015,    0.016016,    0.017017,    0.018018,    0.019019,     0.02002,    0.021021,    0.022022,    0.023023,
          0.024024,    0.025025,    0.026026,    0.027027,    0.028028,    0.029029,     0.03003,    0.031031,    0.032032,    0.033033,    0.034034,    0.035035,    0.036036,    0.037037,    0.038038,    0.039039,     0.04004,    0.041041,    0.042042,    0.043043,    0.044044,    0.045045,    0.046046,    0.047047,
          0.048048,    0.049049,     0.05005,    0.051051,    0.052052,    0.053053,    0.054054,    0.055055,    0.056056,    0.057057,    0.058058,    0.059059,     0.06006,    0.061061,    0.062062,    0.063063,    0.064064,    0.065065,    0.066066,    0.067067,    0.068068,    0.069069,     0.07007,    0.071071,
          0.072072,    0.073073,    0.074074,    0.075075,    0.076076,    0.077077,    0.078078,    0.079079,     0.08008,    0.081081,    0.082082,    0.083083,    0.084084,    0.085085,    0.086086,    0.087087,    0.088088,    0.089089,     0.09009,    0.091091,    0.092092,    0.093093,    0.094094,    0.095095,
          0.096096,    0.097097,    0.098098,    0.099099,      0.1001,      0.1011,      0.1021,      0.1031,      0.1041,     0.10511,     0.10611,     0.10711,     0.10811,     0.10911,     0.11011,     0.11111,     0.11211,     0.11311,     0.11411,     0.11512,     0.11612,     0.11712,     0.11812,     0.11912,
           0.12012,     0.12112,     0.12212,     0.12312,     0.12412,     0.12513,     0.12613,     0.12713,     0.12813,     0.12913,     0.13013,     0.13113,     0.13213,     0.13313,     0.13413,     0.13514,     0.13614,     0.13714,     0.13814,     0.13914,     0.14014,     0.14114,     0.14214,     0.14314,
           0.14414,     0.14515,     0.14615,     0.14715,     0.14815,     0.14915,     0.15015,     0.15115,     0.15215,     0.15315,     0.15415,     0.15516,     0.15616,     0.15716,     0.15816,     0.15916,     0.16016,     0.16116,     0.16216,     0.16316,     0.16416,     0.16517,     0.16617,     0.16717,
           0.16817,     0.16917,     0.17017,     0.17117,     0.17217,     0.17317,     0.17417,     0.17518,     0.17618,     0.17718,     0.17818,     0.17918,     0.18018,     0.18118,     0.18218,     0.18318,     0.18418,     0.18519,     0.18619,     0.18719,     0.18819,     0.18919,     0.19019,     0.19119,
           0.19219,     0.19319,     0.19419,      0.1952,      0.1962,      0.1972,      0.1982,      0.1992,      0.2002,      0.2012,      0.2022,      0.2032,      0.2042,     0.20521,     0.20621,     0.20721,     0.20821,     0.20921,     0.21021,     0.21121,     0.21221,     0.21321,     0.21421,     0.21522,
           0.21622,     0.21722,     0.21822,     0.21922,     0.22022,     0.22122,     0.22222,     0.22322,     0.22422,     0.22523,     0.22623,     0.22723,     0.22823,     0.22923,     0.23023,     0.23123,     0.23223,     0.23323,     0.23423,     0.23524,     0.23624,     0.23724,     0.23824,     0.23924,
           0.24024,     0.24124,     0.24224,     0.24324,     0.24424,     0.24525,     0.24625,     0.24725,     0.24825,     0.24925,     0.25025,     0.25125,     0.25225,     0.25325,     0.25425,     0.25526,     0.25626,     0.25726,     0.25826,     0.25926,     0.26026,     0.26126,     0.26226,     0.26326,
           0.26426,     0.26527,     0.26627,     0.26727,     0.26827,     0.26927,     0.27027,     0.27127,     0.27227,     0.27327,     0.27427,     0.27528,     0.27628,     0.27728,     0.27828,     0.27928,     0.28028,     0.28128,     0.28228,     0.28328,     0.28428,     0.28529,     0.28629,     0.28729,
           0.28829,     0.28929,     0.29029,     0.29129,     0.29229,     0.29329,     0.29429,      0.2953,      0.2963,      0.2973,      0.2983,      0.2993,      0.3003,      0.3013,      0.3023,      0.3033,      0.3043,     0.30531,     0.30631,     0.30731,     0.30831,     0.30931,     0.31031,     0.31131,
           0.31231,     0.31331,     0.31431,     0.31532,     0.31632,     0.31732,     0.31832,     0.31932,     0.32032,     0.32132,     0.32232,     0.32332,     0.32432,     0.32533,     0.32633,     0.32733,     0.32833,     0.32933,     0.33033,     0.33133,     0.33233,     0.33333,     0.33433,     0.33534,
           0.33634,     0.33734,     0.33834,     0.33934,     0.34034,     0.34134,     0.34234,     0.34334,     0.34434,     0.34535,     0.34635,     0.34735,     0.34835,     0.34935,     0.35035,     0.35135,     0.35235,     0.35335,     0.35435,     0.35536,     0.35636,     0.35736,     0.35836,     0.35936,
           0.36036,     0.36136,     0.36236,     0.36336,     0.36436,     0.36537,     0.36637,     0.36737,     0.36837,     0.36937,     0.37037,     0.37137,     0.37237,     0.37337,     0.37437,     0.37538,     0.37638,     0.37738,     0.37838,     0.37938,     0.38038,     0.38138,     0.38238,     0.38338,
           0.38438,     0.38539,     0.38639,     0.38739,     0.38839,     0.38939,     0.39039,     0.39139,     0.39239,     0.39339,     0.39439,      0.3954,      0.3964,      0.3974,      0.3984,      0.3994,      0.4004,      0.4014,      0.4024,      0.4034,      0.4044,     0.40541,     0.40641,     0.40741,
           0.40841,     0.40941,     0.41041,     0.41141,     0.41241,     0.41341,     0.41441,     0.41542,     0.41642,     0.41742,     0.41842,     0.41942,     0.42042,     0.42142,     0.42242,     0.42342,     0.42442,     0.42543,     0.42643,     0.42743,     0.42843,     0.42943,     0.43043,     0.43143,
           0.43243,     0.43343,     0.43443,     0.43544,     0.43644,     0.43744,     0.43844,     0.43944,     0.44044,     0.44144,     0.44244,     0.44344,     0.44444,     0.44545,     0.44645,     0.44745,     0.44845,     0.44945,     0.45045,     0.45145,     0.45245,     0.45345,     0.45445,     0.45546,
           0.45646,     0.45746,     0.45846,     0.45946,     0.46046,     0.46146,     0.46246,     0.46346,     0.46446,     0.46547,     0.46647,     0.46747,     0.46847,     0.46947,     0.47047,     0.47147,     0.47247,     0.47347,     0.47447,     0.47548,     0.47648,     0.47748,     0.47848,     0.47948,
           0.48048,     0.48148,     0.48248,     0.48348,     0.48448,     0.48549,     0.48649,     0.48749,     0.48849,     0.48949,     0.49049,     0.49149,     0.49249,     0.49349,     0.49449,      0.4955,      0.4965,      0.4975,      0.4985,      0.4995,      0.5005,      0.5015,      0.5025,      0.5035,
            0.5045,     0.50551,     0.50651,     0.50751,     0.50851,     0.50951,     0.51051,     0.51151,     0.51251,     0.51351,     0.51451,     0.51552,     0.51652,     0.51752,     0.51852,     0.51952,     0.52052,     0.52152,     0.52252,     0.52352,     0.52452,     0.52553,     0.52653,     0.52753,
           0.52853,     0.52953,     0.53053,     0.53153,     0.53253,     0.53353,     0.53453,     0.53554,     0.53654,     0.53754,     0.53854,     0.53954,     0.54054,     0.54154,     0.54254,     0.54354,     0.54454,     0.54555,     0.54655,     0.54755,     0.54855,     0.54955,     0.55055,     0.55155,
           0.55255,     0.55355,     0.55455,     0.55556,     0.55656,     0.55756,     0.55856,     0.55956,     0.56056,     0.56156,     0.56256,     0.56356,     0.56456,     0.56557,     0.56657,     0.56757,     0.56857,     0.56957,     0.57057,     0.57157,     0.57257,     0.57357,     0.57457,     0.57558,
           0.57658,     0.57758,     0.57858,     0.57958,     0.58058,     0.58158,     0.58258,     0.58358,     0.58458,     0.58559,     0.58659,     0.58759,     0.58859,     0.58959,     0.59059,     0.59159,     0.59259,     0.59359,     0.59459,      0.5956,      0.5966,      0.5976,      0.5986,      0.5996,
            0.6006,      0.6016,      0.6026,      0.6036,      0.6046,     0.60561,     0.60661,     0.60761,     0.60861,     0.60961,     0.61061,     0.61161,     0.61261,     0.61361,     0.61461,     0.61562,     0.61662,     0.61762,     0.61862,     0.61962,     0.62062,     0.62162,     0.62262,     0.62362,
           0.62462,     0.62563,     0.62663,     0.62763,     0.62863,     0.62963,     0.63063,     0.63163,     0.63263,     0.63363,     0.63463,     0.63564,     0.63664,     0.63764,     0.63864,     0.63964,     0.64064,     0.64164,     0.64264,     0.64364,     0.64464,     0.64565,     0.64665,     0.64765,
           0.64865,     0.64965,     0.65065,     0.65165,     0.65265,     0.65365,     0.65465,     0.65566,     0.65666,     0.65766,     0.65866,     0.65966,     0.66066,     0.66166,     0.66266,     0.66366,     0.66466,     0.66567,     0.66667,     0.66767,     0.66867,     0.66967,     0.67067,     0.67167,
           0.67267,     0.67367,     0.67467,     0.67568,     0.67668,     0.67768,     0.67868,     0.67968,     0.68068,     0.68168,     0.68268,     0.68368,     0.68468,     0.68569,     0.68669,     0.68769,     0.68869,     0.68969,     0.69069,     0.69169,     0.69269,     0.69369,     0.69469,      0.6957,
            0.6967,      0.6977,      0.6987,      0.6997,      0.7007,      0.7017,      0.7027,      0.7037,      0.7047,     0.70571,     0.70671,     0.70771,     0.70871,     0.70971,     0.71071,     0.71171,     0.71271,     0.71371,     0.71471,     0.71572,     0.71672,     0.71772,     0.71872,     0.71972,
           0.72072,     0.72172,     0.72272,     0.72372,     0.72472,     0.72573,     0.72673,     0.72773,     0.72873,     0.72973,     0.73073,     0.73173,     0.73273,     0.73373,     0.73473,     0.73574,     0.73674,     0.73774,     0.73874,     0.73974,     0.74074,     0.74174,     0.74274,     0.74374,
           0.74474,     0.74575,     0.74675,     0.74775,     0.74875,     0.74975,     0.75075,     0.75175,     0.75275,     0.75375,     0.75475,     0.75576,     0.75676,     0.75776,     0.75876,     0.75976,     0.76076,     0.76176,     0.76276,     0.76376,     0.76476,     0.76577,     0.76677,     0.76777,
           0.76877,     0.76977,     0.77077,     0.77177,     0.77277,     0.77377,     0.77477,     0.77578,     0.77678,     0.77778,     0.77878,     0.77978,     0.78078,     0.78178,     0.78278,     0.78378,     0.78478,     0.78579,     0.78679,     0.78779,     0.78879,     0.78979,     0.79079,     0.79179,
           0.79279,     0.79379,     0.79479,      0.7958,      0.7968,      0.7978,      0.7988,      0.7998,      0.8008,      0.8018,      0.8028,      0.8038,      0.8048,     0.80581,     0.80681,     0.80781,     0.80881,     0.80981,     0.81081,     0.81181,     0.81281,     0.81381,     0.81481,     0.81582,
           0.81682,     0.81782,     0.81882,     0.81982,     0.82082,     0.82182,     0.82282,     0.82382,     0.82482,     0.82583,     0.82683,     0.82783,     0.82883,     0.82983,     0.83083,     0.83183,     0.83283,     0.83383,     0.83483,     0.83584,     0.83684,     0.83784,     0.83884,     0.83984,
           0.84084,     0.84184,     0.84284,     0.84384,     0.84484,     0.84585,     0.84685,     0.84785,     0.84885,     0.84985,     0.85085,     0.85185,     0.85285,     0.85385,     0.85485,     0.85586,     0.85686,     0.85786,     0.85886,     0.85986,     0.86086,     0.86186,     0.86286,     0.86386,
           0.86486,     0.86587,     0.86687,     0.86787,     0.86887,     0.86987,     0.87087,     0.87187,     0.87287,     0.87387,     0.87487,     0.87588,     0.87688,     0.87788,     0.87888,     0.87988,     0.88088,     0.88188,     0.88288,     0.88388,     0.88488,     0.88589,     0.88689,     0.88789,
           0.88889,     0.88989,     0.89089,     0.89189,     0.89289,     0.89389,     0.89489,      0.8959,      0.8969,      0.8979,      0.8989,      0.8999,      0.9009,      0.9019,      0.9029,      0.9039,      0.9049,     0.90591,     0.90691,     0.90791,     0.90891,     0.90991,     0.91091,     0.91191,
           0.91291,     0.91391,     0.91491,     0.91592,     0.91692,     0.91792,     0.91892,     0.91992,     0.92092,     0.92192,     0.92292,     0.92392,     0.92492,     0.92593,     0.92693,     0.92793,     0.92893,     0.92993,     0.93093,     0.93193,     0.93293,     0.93393,     0.93493,     0.93594,
           0.93694,     0.93794,     0.93894,     0.93994,     0.94094,     0.94194,     0.94294,     0.94394,     0.94494,     0.94595,     0.94695,     0.94795,     0.94895,     0.94995,     0.95095,     0.95195,     0.95295,     0.95395,     0.95495,     0.95596,     0.95696,     0.95796,     0.95896,     0.95996,
           0.96096,     0.96196,     0.96296,     0.96396,     0.96496,     0.96597,     0.96697,     0.96797,     0.96897,     0.96997,     0.97097,     0.97197,     0.97297,     0.97397,     0.97497,     0.97598,     0.97698,     0.97798,     0.97898,     0.97998,     0.98098,     0.98198,     0.98298,     0.98398,
           0.98498,     0.98599,     0.98699,     0.98799,     0.98899,     0.98999,     0.99099,     0.99199,     0.99299,     0.99399,     0.99499,       0.996,       0.997,       0.998,       0.999,           1]), array([[    0.95669,     0.95669,     0.95276, ...,           0,           0,           0],
       [    0.83333,     0.83333,     0.83333, ...,           0,           0,           0],
       [    0.82609,     0.82609,     0.82609, ...,           0,           0,           0],
       ...,
       [          1,           1,           1, ...,           0,           0,           0],
       [          1,           1,           1, ...,           0,           0,           0],
       [          1,           1,           1, ...,           0,           0,           0]]), 'Confidence', 'Recall']]
fitness: np.float64(0.8255287544201578)
keys: ['metrics/precision(B)', 'metrics/recall(B)', 'metrics/mAP50(B)', 'metrics/mAP50-95(B)']
maps: array([    0.75412,     0.63903,     0.41296,       0.995,     0.96812,     0.77275,      0.9432,     0.67183,     0.66633,     0.32049,     0.81284,     0.82439,     0.81284,     0.92504,     0.83701,       0.995,     0.95141,      0.8955,     0.81284,     0.81284,     0.92269,       0.995,       0.995,     0.98101,
           0.66656,     0.86947,      0.6929,     0.78625,       0.902,     0.70888,      0.8955,     0.77263,     0.40951,     0.66091,     0.42614,     0.41974,     0.83395,     0.81284,     0.62183,      0.6069,     0.63543,     0.78458,     0.76411,     0.69596,     0.68787,     0.81765,       0.995,     0.81284,
             0.995,      0.8744,     0.74271,     0.81827,       0.995,       0.945,     0.96893,       0.995,     0.82049,     0.94289,     0.83942,       0.995,     0.84753,     0.95775,       0.995,      0.9648,     0.65347,      0.7156,     0.81284,     0.76396,       0.995,     0.88804,     0.81284,     0.78227,
            0.9159,     0.62052,     0.89885,     0.94712,      0.8955,      0.8924,     0.81284,     0.92639])
names: {0: 'person', 1: 'bicycle', 2: 'car', 3: 'motorcycle', 4: 'airplane', 5: 'bus', 6: 'train', 7: 'truck', 8: 'boat', 9: 'traffic light', 10: 'fire hydrant', 11: 'stop sign', 12: 'parking meter', 13: 'bench', 14: 'bird', 15: 'cat', 16: 'dog', 17: 'horse', 18: 'sheep', 19: 'cow', 20: 'elephant', 21: 'bear', 22: 'zebra', 23: 'giraffe', 24: 'backpack', 25: 'umbrella', 26: 'handbag', 27: 'tie', 28: 'suitcase', 29: 'frisbee', 30: 'skis', 31: 'snowboard', 32: 'sports ball', 33: 'kite', 34: 'baseball bat', 35: 'baseball glove', 36: 'skateboard', 37: 'surfboard', 38: 'tennis racket', 39: 'bottle', 40: 'wine glass', 41: 'cup', 42: 'fork', 43: 'knife', 44: 'spoon', 45: 'bowl', 46: 'banana', 47: 'apple', 48: 'sandwich', 49: 'orange', 50: 'broccoli', 51: 'carrot', 52: 'hot dog', 53: 'pizza', 54: 'donut', 55: 'cake', 56: 'chair', 57: 'couch', 58: 'potted plant', 59: 'bed', 60: 'dining table', 61: 'toilet', 62: 'tv', 63: 'laptop', 64: 'mouse', 65: 'remote', 66: 'keyboard', 67: 'cell phone', 68: 'microwave', 69: 'oven', 70: 'toaster', 71: 'sink', 72: 'refrigerator', 73: 'book', 74: 'clock', 75: 'vase', 76: 'scissors', 77: 'teddy bear', 78: 'hair drier', 79: 'toothbrush'}
nt_per_class: array([254,   6,  46,   5,   6,   7,   3,  12,   6,  14,   0,   2,   0,   9,  16,   4,   9,   2,   0,   0,  17,   1,   4,   9,   6,  18,  19,   7,   4,   5,   1,   7,   6,  10,   4,   7,   5,   0,   7,  18,  16,  36,   6,  16,  22,  28,   1,   0,   2,   4,  11,  24,   2,   5,  14,   4,  35,   6,  14,   3,  13,   2,
         2,   3,   2,   8,   0,   8,   3,   5,   0,   6,   5,  29,   9,   2,   1,  21,   0,   5])
nt_per_image: array([61,  3, 12,  4,  5,  5,  3,  5,  2,  4,  0,  2,  0,  5,  2,  4,  9,  1,  0,  0,  4,  1,  2,  4,  4,  4,  9,  6,  2,  5,  1,  2,  6,  2,  4,  4,  3,  0,  5,  6,  5, 10,  6,  7,  5,  9,  1,  0,  2,  1,  4,  3,  1,  5,  2,  4,  9,  5,  9,  3, 10,  2,  2,  2,  2,  5,  0,  5,  3,  5,  0,  4,  5,  6,  8,  2,  1,  6,
        0,  2])
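The per-class `maps` array lines up index-for-index with the `names` dict, and `nt_per_class` tells you how many ground-truth instances each class actually had in coco128. A minimal sketch of pairing them up (toy stand-in values copied from the first three entries printed above; with the real objects you would use `metrics.maps`, `metrics.names`, and `metrics.nt_per_class` directly):

```python
import numpy as np

# Toy stand-ins for the fields printed above (the real `maps` has one
# entry per class in `names`, in index order).
names = {0: 'person', 1: 'bicycle', 2: 'car'}
maps = np.array([0.75412, 0.63903, 0.41296])   # mAP50-95 per class
nt_per_class = np.array([254, 6, 46])          # GT instances per class

# Pair class names with their mAP50-95, skipping classes with no labels,
# and sort worst-first to see where the pruned model struggles.
per_class = sorted(
    ((names[i], float(maps[i])) for i in names if nt_per_class[i] > 0),
    key=lambda kv: kv[1],
)
for name, m in per_class:
    print(f"{name:12s} {m:.3f}")
```

Classes with zero instances (e.g. `fire hydrant` above) still get an entry in `maps`, so filtering on `nt_per_class` avoids reading meaning into unevaluated classes.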
results_dict: {'metrics/precision(B)': np.float64(0.9490510981853786), 'metrics/recall(B)': np.float64(0.8897383245004401), 'metrics/mAP50(B)': np.float64(0.9396973902877623), 'metrics/mAP50-95(B)': np.float64(0.8128433504348684), 'fitness': np.float64(0.8255287544201578)}
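The `fitness` value is not an independent metric: it is Ultralytics' default weighted combination of the detection metrics, 0.1 × mAP50 + 0.9 × mAP50-95. Plugging in the values from `results_dict` above reproduces the printed number exactly:

```python
# Values copied verbatim from the results_dict printed above.
map50 = 0.9396973902877623      # metrics/mAP50(B)
map50_95 = 0.8128433504348684   # metrics/mAP50-95(B)

# Ultralytics' detection fitness weights [P, R, mAP50, mAP50-95]
# as [0.0, 0.0, 0.1, 0.9], so only the two mAP terms contribute.
fitness = 0.1 * map50 + 0.9 * map50_95
print(fitness)  # matches the printed fitness: 0.8255287544201578
```

This is the scalar the trainer uses to pick `best.pt`, which is why a pruning run that trades a little mAP50 for mAP50-95 can still improve fitness.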
save_dir: Path('runs/detect/val10')
speed: {'preprocess': 0.11946710401389282, 'inference': 5.948787069428363, 'loss': 0.0031694062272435986, 'postprocess': 0.4260614005033858}
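The `speed` dict reports per-image milliseconds for each pipeline stage; summing them gives the end-to-end latency per image, a useful number to track before and after pruning. A quick check using the printed values:

```python
# Stage timings (ms/image) copied from the speed dict printed above.
speed = {'preprocess': 0.11946710401389282,
         'inference': 5.948787069428363,
         'loss': 0.0031694062272435986,
         'postprocess': 0.4260614005033858}

# End-to-end latency per image is the sum of the stages.
total_ms = sum(speed.values())
print(f"total: {total_ms:.2f} ms/image")  # ~6.50 ms/image
```

Inference dominates here, so that is the term channel pruning should shrink; pre/post-processing are unaffected by the model's size.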
stats: {'tp': [], 'conf': [], 'pred_cls': [], 'target_cls': [], 'target_img': []}
task: 'detect'