Releases: roboflow/supervision
supervision-0.17.1
🚀 Added
- Support for Python 3.12.
🏆 Contributors
@onuralpszr (Onuralp SEZER), @SkalskiP (Piotr Skalski)
supervision-0.17.0
🚀 Added

- `sv.PixelateAnnotator` allowing to pixelate objects on images and videos. (#633)
walking-pixelate-corner-optimized.mp4
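Conceptually, pixelation replaces each block of pixels inside a detection's bounding box with that block's mean color. A minimal numpy sketch of the idea (the `pixelate_region` helper below is illustrative, not the library implementation):

```python
import numpy as np

def pixelate_region(image: np.ndarray, xyxy: tuple, pixel_size: int = 8) -> np.ndarray:
    """Pixelate the (x1, y1, x2, y2) region by filling each square block with its mean color."""
    x1, y1, x2, y2 = xyxy
    out = image.copy()
    region = out[y1:y2, x1:x2]  # view into the copy, edits below land in `out`
    h, w = region.shape[:2]
    for by in range(0, h, pixel_size):
        for bx in range(0, w, pixel_size):
            block = region[by:by + pixel_size, bx:bx + pixel_size]
            block[...] = block.mean(axis=(0, 1))  # broadcast mean color over the block
    return out

image = np.arange(64 * 64 * 3, dtype=np.uint8).reshape(64, 64, 3)
pixelated = pixelate_region(image, (8, 8, 40, 40), pixel_size=8)
```

Pixels outside the box are untouched; each 8x8 block inside it becomes a single flat color, which is the visual effect the annotator applies per detection.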
- `sv.TriangleAnnotator` allowing to annotate images and videos with triangle markers. (#652)
- `sv.PolygonAnnotator` allowing to annotate images and videos with segmentation mask outline. (#602)

>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> polygon_annotator = sv.PolygonAnnotator()
>>> annotated_frame = polygon_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
walking-polygon-optimized.mp4
- `sv.assets` allowing download of video files that you can use in your demos. (#476)

>>> from supervision.assets import download_assets, VideoAssets
>>> download_assets(VideoAssets.VEHICLES)
"vehicles.mp4"
- `Position.CENTER_OF_MASS` allowing to place labels in the center of mass of segmentation masks. (#605)
- `sv.scale_boxes` allowing to scale `sv.Detections.xyxy` values. (#651)
- `sv.calculate_dynamic_text_scale` and `sv.calculate_dynamic_line_thickness` allowing text scale and line thickness to match image resolution. (#637)
- `sv.Color.as_hex` allowing to extract color value in HEX format. (#620)
- `sv.Classifications.from_timm` allowing to load classification results from timm models. (#572)
- `sv.Classifications.from_clip` allowing to load classification results from the CLIP model. (#478)
- `sv.Detections.from_azure_analyze_image` allowing to load detection results from Azure Image Analysis. (#571)
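Scaling boxes around their centers, as `sv.scale_boxes` does for `sv.Detections.xyxy` arrays, can be sketched in a few lines of numpy (the helper below is a hypothetical stand-in, not the library code):

```python
import numpy as np

def scale_boxes(xyxy: np.ndarray, factor: float) -> np.ndarray:
    """Scale (N, 4) boxes in xyxy format around their centers by `factor`."""
    centers = (xyxy[:, :2] + xyxy[:, 2:]) / 2                # (N, 2) box centers
    half_sizes = (xyxy[:, 2:] - xyxy[:, :2]) / 2 * factor    # scaled half widths/heights
    return np.hstack([centers - half_sizes, centers + half_sizes])

boxes = np.array([[10.0, 10.0, 30.0, 30.0]])
scaled = scale_boxes(boxes, factor=2.0)  # -> [[0., 0., 40., 40.]]
```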
🌱 Changed

- `sv.BoxMaskAnnotator`, renaming it to `sv.ColorAnnotator`. (#646)
- `sv.MaskAnnotator` to make it 5x faster. (#606)
🛠️ Fixed

- `sv.DetectionDataset.from_yolo` to ignore empty lines in annotation files. (#584)
- `sv.BlurAnnotator` to trim negative coordinates before blurring detections. (#555)
- `sv.TraceAnnotator` to respect trace position. (#511)
🏆 Contributors
@onuralpszr (Onuralp SEZER), @hugoles (Hugo Dutra), @karanjakhar (Karan Jakhar), @kim-jeonghyun (Jeonghyun Kim), @fdloopes (Felipe Lopes), @abhishek7kalra (Abhishek Kalra), @SummitStudiosDev, @xenteros, @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.16.0
🚀 Added
supervision-0.16.0-annotators.mp4
- `sv.BoxMaskAnnotator` allowing to annotate images and videos with box masks. (#422)
- `sv.HaloAnnotator` allowing to annotate images and videos with halo effect. (#433)
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> halo_annotator = sv.HaloAnnotator()
>>> annotated_frame = halo_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- `sv.HeatMapAnnotator` allowing to annotate videos with heat maps. (#466)
- `sv.DotAnnotator` allowing to annotate images and videos with dots. (#492)
- `sv.draw_image` allowing to draw an image onto a given scene with specified opacity and dimensions. (#449)
- `sv.FPSMonitor` for monitoring frames per second (FPS) to benchmark latency. (#280)
- 🤗 Hugging Face Annotators space. (#454)
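The FPS-monitoring idea behind `sv.FPSMonitor` can be sketched with a ring buffer of frame timestamps; the class below is an illustrative stand-in, not the library implementation:

```python
import time
from collections import deque

class FrameRateMonitor:
    """Track timestamps of recent frames and report the average FPS over them."""

    def __init__(self, sample_size: int = 30):
        # deque with maxlen drops the oldest timestamp automatically
        self.timestamps = deque(maxlen=sample_size)

    def tick(self) -> None:
        """Record that a new frame was processed."""
        self.timestamps.append(time.monotonic())

    @property
    def fps(self) -> float:
        """Average frames per second across the recorded window."""
        if len(self.timestamps) < 2:
            return 0.0
        elapsed = self.timestamps[-1] - self.timestamps[0]
        return (len(self.timestamps) - 1) / elapsed if elapsed > 0 else 0.0

monitor = FrameRateMonitor(sample_size=5)
for _ in range(3):
    monitor.tick()
    time.sleep(0.01)  # stand-in for per-frame processing work
```

Calling `tick()` once per processed frame and reading `fps` gives a rolling latency benchmark without timing any single frame in isolation.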
🌱 Changed

- `sv.LineZone.trigger` now returns `Tuple[np.ndarray, np.ndarray]`. The first array indicates which detections have crossed the line from outside to inside. The second array indicates which detections have crossed the line from inside to outside. (#482)
- Annotator argument name from `color_map: str` to `color_lookup: ColorLookup` enum to increase type safety. (#465)
- `sv.MaskAnnotator` allowing 2x faster annotation. (#426)
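The crossing logic behind the two returned arrays can be sketched with plain numpy: classify each tracked point by which side of the line it lies on, then flag sign changes between frames (all names below are illustrative, and which sign counts as "inside" is an arbitrary convention here):

```python
import numpy as np

def line_side(points: np.ndarray, start: np.ndarray, end: np.ndarray) -> np.ndarray:
    """Sign of the 2D cross product: which side of the start->end line each point is on."""
    d = end - start
    return np.sign(d[0] * (points[:, 1] - start[1]) - d[1] * (points[:, 0] - start[0]))

start, end = np.array([0.0, 0.0]), np.array([0.0, 10.0])  # vertical line x = 0

prev_side = line_side(np.array([[-1.0, 5.0], [1.0, 5.0]]), start, end)  # last frame
curr_side = line_side(np.array([[1.0, 5.0], [1.0, 5.0]]), start, end)   # this frame

crossed_in = (prev_side < 0) & (curr_side > 0)   # one direction of crossing
crossed_out = (prev_side > 0) & (curr_side < 0)  # the opposite direction
```

Only the first point changed sides, so exactly one of the two boolean arrays marks it, mirroring the in/out pair that `sv.LineZone.trigger` now returns per detection.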
🛠️ Fixed

- Poetry env definition allowing proper local installation. (#477)
- `sv.ByteTrack` to return `np.array([], dtype=int)` when `sv.Detections` is empty. (#430)
- YOLO-NAS detection missing prediction part added & fixed. (#416)
- SAM detection in the Demo Notebook, with `MaskAnnotator(color_map="index")` using `color_map` set to `index`. (#416)
🗑️ Deleted

Warning
Deleted `sv.Detections.from_yolov8` and `sv.Classifications.from_yolov8` as those are now replaced by `sv.Detections.from_ultralytics` and `sv.Classifications.from_ultralytics`. (#438)
🏆 Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @kapter, @keshav278 (Keshav Subramanian), @akashpambhar (Akash Pambhar), @AntonioConsiglio (Antonio Consiglio), @ashishdatta, @mario-dg (Mario da Graca), @jayaBalaR (JAYABALAMBIKA.R), @abhishek7kalra (Abhishek Kalra), @PankajKrana (Pankaj Kumar Rana), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.15.0
🚀 Added
supervision-0.15.0.mp4
- `sv.LabelAnnotator` allowing to annotate images and videos with text. (#170)
- `sv.BoundingBoxAnnotator` allowing to annotate images and videos with bounding boxes. (#170)
- `sv.BoxCornerAnnotator` allowing to annotate images and videos with just bounding box corners. (#170)
- `sv.MaskAnnotator` allowing to annotate images and videos with segmentation masks. (#170)
- `sv.EllipseAnnotator` allowing to annotate images and videos with ellipses (sports game style). (#170)
- `sv.CircleAnnotator` allowing to annotate images and videos with circles. (#386)
- `sv.TraceAnnotator` allowing to draw paths of moving objects on videos. (#354)
- `sv.BlurAnnotator` allowing to blur objects on images and videos. (#405)
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> bounding_box_annotator = sv.BoundingBoxAnnotator()
>>> annotated_frame = bounding_box_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Supervision usage example. You can now learn how to perform traffic flow analysis with Supervision. (#354)
traffic_analysis_result.mov
🌱 Changed

- `sv.Detections.from_roboflow` now does not require `class_list` to be specified. The `class_id` value can be extracted directly from the inference response. (#399)
- `sv.VideoSink` now allows to customize the output codec. (#381)
- `sv.InferenceSlicer` can now operate in multithreading mode. (#361)
🛠️ Fixed

- `sv.Detections.from_deepsparse` to allow processing empty deepsparse result object. (#348)
🏆 Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @Killua7362 (Akshay Bhat), @fcakyon (Fatih C. Akyon), @akashAD98 (Akash A Desai), @Rajarshi-Misra (Rajarshi Misra), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.14.0
🚀 Added

- Support for SAHI inference technique with `sv.InferenceSlicer`. (#282)
>>> import cv2
>>> import supervision as sv
>>> import numpy as np
>>> from ultralytics import YOLO
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO(...)
>>> def callback(image_slice: np.ndarray) -> sv.Detections:
... result = model(image_slice)[0]
... return sv.Detections.from_ultralytics(result)
>>> slicer = sv.InferenceSlicer(callback = callback)
>>> detections = slicer(image)
inference-slicer.mov
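The slicing step behind SAHI-style inference can be sketched as generating overlapping tiles that cover the image, running the callback on each tile, and merging results; `generate_tiles` below is a hypothetical helper, not the `sv.InferenceSlicer` implementation:

```python
def generate_tiles(image_wh: tuple, tile_wh: tuple, overlap_wh: tuple) -> list:
    """Return (x1, y1, x2, y2) tile boxes covering the image, with the given overlap."""
    w, h = image_wh
    tw, th = tile_wh
    ox, oy = overlap_wh
    step_x, step_y = tw - ox, th - oy  # stride between tile origins
    tiles = []
    for y in range(0, h, step_y):
        for x in range(0, w, step_x):
            # clamp tiles at the right/bottom edges so they stay inside the image
            tiles.append((x, y, min(x + tw, w), min(y + th, h)))
    return tiles

# a 640x640 image with 320x320 tiles and 64 px overlap -> a 3x3 grid of tiles
tiles = generate_tiles((640, 640), (320, 320), (64, 64))
```

The overlap matters because objects cut by a tile boundary are seen whole in the neighboring tile; detections from all tiles are then shifted back to image coordinates and deduplicated (typically with NMS).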
- `Detections.from_deepsparse` to enable seamless integration with DeepSparse framework. (#297)
- `sv.Classifications.from_ultralytics` to enable seamless integration with Ultralytics framework. This will enable you to use supervision with all models that Ultralytics supports. (#281)

Warning
`sv.Detections.from_yolov8` and `sv.Classifications.from_yolov8` are now deprecated and will be removed with the `supervision-0.16.0` release.

- First supervision usage example script showing how to detect and track objects on video using YOLOv8 + Supervision. (#341)
detect-and-track-objects-on-video.mov
🌱 Changed

- `sv.ClassificationDataset` and `sv.DetectionDataset` now use image path (not image name) as dataset keys. (#296)
🛠️ Fixed

- `Detections.from_roboflow` to filter out polygons with less than 3 points. (#300)
🏆 Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @mayankagarwals (Mayank Agarwal), @rizavelioglu (Riza Velioglu), @arjun-234 (Arjun D.), @mwitiderrick (Derrick Mwiti), @ShubhamKanitkar32, @gasparitiago (Tiago De Gaspari), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.13.0
🚀 Added

- Support for mean average precision (mAP) for object detection models with `sv.MeanAveragePrecision`. (#236)
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... result = model(image)[0]
... return sv.Detections.from_yolov8(result)
>>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> mean_average_precision.map50_95
0.433
- Support for `ByteTrack` for object tracking with `sv.ByteTrack`. (#256)
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> model = YOLO(...)
>>> byte_tracker = sv.ByteTrack()
>>> annotator = sv.BoxAnnotator()
>>> def callback(frame: np.ndarray, index: int) -> np.ndarray:
... results = model(frame)[0]
... detections = sv.Detections.from_yolov8(results)
... detections = byte_tracker.update_from_detections(detections=detections)
... labels = [
... f"#{tracker_id} {model.model.names[class_id]} {confidence:0.2f}"
... for _, _, confidence, class_id, tracker_id
... in detections
... ]
... return annotator.annotate(scene=frame.copy(), detections=detections, labels=labels)
>>> sv.process_video(
... source_path='...',
... target_path='...',
... callback=callback
... )
byte_track_result_small.mp4
- `sv.Detections.from_ultralytics` to enable seamless integration with Ultralytics framework. This will enable you to use `supervision` with all models that Ultralytics supports. (#222)

Warning
`sv.Detections.from_yolov8` is now deprecated and will be removed with the `supervision-0.15.0` release.

- `sv.Detections.from_paddledet` to enable seamless integration with PaddleDetection framework. (#191)
- Support for loading PASCAL VOC segmentation datasets with `sv.DetectionDataset`. (#245)
🏆 Contributors
@hardikdava (Hardik Dava), @kirilllzaitsev (Kirill Zaitsev), @onuralpszr (Onuralp SEZER), @dbroboflow, @mayankagarwals (Mayank Agarwal), @danigarciaoca (Daniel M. GarcΓa-OcaΓ±a), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.12.0
Warning
With the `supervision-0.12.0` release, we are terminating official support for Python 3.7. (#179)
🚀 Added

- Initial support for object detection model benchmarking with `sv.ConfusionMatrix`. (#177)
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... result = model(image)[0]
... return sv.Detections.from_yolov8(result)
>>> confusion_matrix = sv.ConfusionMatrix.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> confusion_matrix.matrix
array([
[0., 0., 0., 0.],
[0., 1., 0., 1.],
[0., 1., 1., 0.],
[1., 1., 0., 0.]
])
- `Detections.from_mmdetection` to enable seamless integration with MMDetection framework. (#173)
- Ability to install package in `headless` or `desktop` mode. (#130)
🌱 Changed

- Packaging method from `setup.py` to `pyproject.toml`. (#180)
🛠️ Fixed

- `sv.DetectionDataset.from_coco` can't be loaded when there are images without annotations. (#188)
- `sv.DetectionDataset.from_yolo` can't load background instances. (#226)
🏆 Contributors
@kirilllzaitsev @hardikdava @onuralpszr @Ucag @SkalskiP @capjamesg
supervision-0.11.1
🛠️ Fixed

- `as_folder_structure` fails to save `sv.ClassificationDataset` when it is the result of inference. (#165)
🏆 Contributors
supervision-0.11.0
🚀 Added

- Ability to load and save `sv.DetectionDataset` in COCO format using `as_coco` and `from_coco` methods. (#150)
>>> import supervision as sv
>>> ds = sv.DetectionDataset.from_coco(
... images_directory_path='...',
... annotations_path='...'
... )
>>> ds.as_coco(
... images_directory_path='...',
... annotations_path='...'
... )
- Ability to merge multiple `sv.DetectionDataset` together using the `merge` method. (#158)
>>> import supervision as sv
>>> ds_1 = sv.DetectionDataset(...)
>>> len(ds_1)
100
>>> ds_1.classes
['dog', 'person']
>>> ds_2 = sv.DetectionDataset(...)
>>> len(ds_2)
200
>>> ds_2.classes
['cat']
>>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
>>> len(ds_merged)
300
>>> ds_merged.classes
['cat', 'dog', 'person']
- Additional `start` and `end` arguments to `sv.get_video_frames_generator` allowing to generate frames only for a selected part of the video. (#162)
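The `start`/`end` semantics can be sketched as plain Python slicing over a frame stream; `frames_in_range` below is an illustrative stand-in, assuming `start` is inclusive and `end` exclusive:

```python
def frames_in_range(frames, start: int = 0, end: int = None):
    """Yield only the frames whose index falls in [start, end)."""
    for index, frame in enumerate(frames):
        if end is not None and index >= end:
            return  # past the requested range, stop iterating
        if index >= start:
            yield frame

# selecting frames 2, 3, 4 out of a 10-frame stream
selected = list(frames_in_range(range(10), start=2, end=5))
```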
🛠️ Fixed

- Incorrect loading of YOLO dataset class names from `data.yaml`. (#157)
🏆 Contributors
supervision-0.10.0
🚀 Added

- Ability to load and save `sv.ClassificationDataset` in a folder structure format. (#125)
>>> import supervision as sv
>>> cs = sv.ClassificationDataset.from_folder_structure(
... root_directory_path='...'
... )
>>> cs.as_folder_structure(
... root_directory_path='...'
... )
- Support for `sv.ClassificationDataset.split` allowing to divide `sv.ClassificationDataset` into two parts. (#125)
>>> import supervision as sv
>>> cs = sv.ClassificationDataset(...)
>>> train_cs, test_cs = cs.split(split_ratio=0.7, random_state=42, shuffle=True)
>>> len(train_cs), len(test_cs)
(700, 300)
- Ability to extract masks from Roboflow API results using `sv.Detections.from_roboflow`. (#110)
- Supervision Quickstart notebook where you can learn more about Detection, Dataset and Video APIs.
🌱 Changed

- `sv.get_video_frames_generator` documentation to better describe actual behavior. (#135)