Drawing#

Collection of drawing functions

draw_points #

draw_points(frame, drawables=None, radius=None, thickness=None, color='by_id', color_by_label=None, draw_labels=True, text_size=None, draw_ids=True, draw_points=True, text_thickness=None, text_color=None, hide_dead_points=True, detections=None, label_size=None, draw_scores=False) #

Draw the points included in a list of Detections or TrackedObjects.

Parameters:

Name Type Description Default
frame ndarray

The OpenCV frame to draw on. Modified in place.

required
drawables Union[Sequence[Detection], Sequence[TrackedObject]]

List of objects to draw; Detections and TrackedObjects are accepted.

None
radius Optional[int]

Radius of the circles representing each point. By default a sensible value is picked considering the frame size.

None
thickness Optional[int]

Thickness or width of the line.

None
color ColorLike

This parameter can take:

  1. A color as a tuple of ints describing the BGR (0, 0, 255)
  2. A 6-digit hex string "#FF0000"
  3. One of the defined color names "red"
  4. A string defining the strategy to choose colors from the Palette:

    1. based on the id of the objects "by_id"
    2. based on the label of the objects "by_label"
    3. random choice "random"

If using the by_id or by_label strategy but your objects don't have that field defined (Detections never have ids), the selected color will be the same for all objects (the Palette's default Color).

'by_id'
color_by_label bool

Deprecated. Set color="by_label" instead.

None
draw_labels bool

If set to True, the label is added to a title that is drawn on top of the box. If an object doesn't have a label this parameter is ignored.

True
draw_scores bool

If set to True, the score is added to a title that is drawn on top of the box. If an object doesn't have a score this parameter is ignored.

False
text_size Optional[int]

Size of the title, the value is used as a multiplier of the base size of the font. By default the size is scaled automatically based on the frame size.

None
draw_ids bool

If set to True, the id is added to a title that is drawn on top of the box. If an object doesn't have an id this parameter is ignored.

True
draw_points bool

Set to False to hide the points and just draw the text.

True
text_thickness Optional[int]

Thickness of the font. By default it's scaled with the text_size.

None
text_color Optional[ColorLike]

Color of the text. By default the same color as the box is used.

None
hide_dead_points bool

Set this param to False to always draw all points, even the ones considered "dead". A point is "dead" when the corresponding value of TrackedObject.live_points is set to False. If all of an object's points are dead, the object is not drawn. All points of a detection are considered to be alive.

True
detections Sequence[Detection]

Deprecated. Use drawables instead.

None
label_size Optional[int]

Deprecated. Use text_size instead.

None

Returns:

Type Description
ndarray

The resulting frame.
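
A minimal usage sketch (hedged: it assumes a recent norfair release where Tracker accepts built-in distance names, and get_detections is a hypothetical wrapper that turns your detector's output into Detections):

>>> from norfair import Tracker, Video, draw_points
>>> video = Video(input_path="video.mp4")
>>> tracker = Tracker(distance_function="euclidean", distance_threshold=30)
>>> for frame in video:
>>>    detections = get_detections(frame)  # hypothetical detector wrapper returning Detections
>>>    tracked_objects = tracker.update(detections=detections)
>>>    # modifies frame in place; color="by_label" picks a Palette color per label
>>>    draw_points(frame, drawables=tracked_objects, color="by_label", draw_ids=True)
>>>    video.write(frame)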

Source code in norfair/drawing/draw_points.py
def draw_points(
    frame: np.ndarray,
    drawables: Union[Sequence[Detection], Sequence[TrackedObject]] = None,
    radius: Optional[int] = None,
    thickness: Optional[int] = None,
    color: ColorLike = "by_id",
    color_by_label: bool = None,  # deprecated
    draw_labels: bool = True,
    text_size: Optional[int] = None,
    draw_ids: bool = True,
    draw_points: bool = True,  # pylint: disable=redefined-outer-name
    text_thickness: Optional[int] = None,
    text_color: Optional[ColorLike] = None,
    hide_dead_points: bool = True,
    detections: Sequence["Detection"] = None,  # deprecated
    label_size: Optional[int] = None,  # deprecated
    draw_scores: bool = False,
) -> np.ndarray:
    """
    Draw the points included in a list of Detections or TrackedObjects.

    Parameters
    ----------
    frame : np.ndarray
        The OpenCV frame to draw on. Modified in place.
    drawables : Union[Sequence[Detection], Sequence[TrackedObject]], optional
        List of objects to draw, Detections and TrackedObjects are accepted.
    radius : Optional[int], optional
        Radius of the circles representing each point.
        By default a sensible value is picked considering the frame size.
    thickness : Optional[int], optional
        Thickness or width of the line.
    color : ColorLike, optional
        This parameter can take:

        1. A color as a tuple of ints describing the BGR `(0, 0, 255)`
        2. A 6-digit hex string `"#FF0000"`
        3. One of the defined color names `"red"`
        4. A string defining the strategy to choose colors from the Palette:

            1. based on the id of the objects `"by_id"`
            2. based on the label of the objects `"by_label"`
            3. random choice `"random"`

        If using `by_id` or `by_label` strategy but your objects don't
        have that field defined (Detections never have ids) the
        selected color will be the same for all objects (Palette's default Color).
    color_by_label : bool, optional
        **Deprecated**. set `color="by_label"`.
    draw_labels : bool, optional
        If set to True, the label is added to a title that is drawn on top of the box.
        If an object doesn't have a label this parameter is ignored.
    draw_scores : bool, optional
        If set to True, the score is added to a title that is drawn on top of the box.
        If an object doesn't have a label this parameter is ignored.
    text_size : Optional[int], optional
        Size of the title, the value is used as a multiplier of the base size of the font.
        By default the size is scaled automatically based on the frame size.
    draw_ids : bool, optional
        If set to True, the id is added to a title that is drawn on top of the box.
        If an object doesn't have an id this parameter is ignored.
    draw_points : bool, optional
        Set to False to hide the points and just draw the text.
    text_thickness : Optional[int], optional
        Thickness of the font. By default it's scaled with the `text_size`.
    text_color : Optional[ColorLike], optional
        Color of the text. By default the same color as the box is used.
    hide_dead_points : bool, optional
        Set this param to False to always draw all points, even the ones considered "dead".
        A point is "dead" when the corresponding value of `TrackedObject.live_points`
        is set to False. If all objects are dead the object is not drawn.
        All points of a detection are considered to be alive.
    detections : Sequence[Detection], optional
        **Deprecated**. use drawables.
    label_size : Optional[int], optional
        **Deprecated**. text_size.

    Returns
    -------
    np.ndarray
        The resulting frame.
    """
    #
    # handle deprecated parameters
    #
    if color_by_label is not None:
        warn_once(
            'Parameter "color_by_label" on function draw_points is deprecated, set `color="by_label"` instead'
        )
        color = "by_label"
    if detections is not None:
        warn_once(
            "Parameter 'detections' on function draw_points is deprecated, use 'drawables' instead"
        )
        drawables = detections
    if label_size is not None:
        warn_once(
            "Parameter 'label_size' on function draw_points is deprecated, use 'text_size' instead"
        )
        text_size = label_size
    # end

    if drawables is None:
        return

    if text_color is not None:
        text_color = parse_color(text_color)

    if color is None:
        color = "by_id"
    if thickness is None:
        thickness = -1
    if radius is None:
        radius = int(round(max(max(frame.shape) * 0.002, 1)))

    for o in drawables:
        if not isinstance(o, Drawable):
            d = Drawable(o)
        else:
            d = o

        if hide_dead_points and not d.live_points.any():
            continue

        if color == "by_id":
            obj_color = Palette.choose_color(d.id)
        elif color == "by_label":
            obj_color = Palette.choose_color(d.label)
        elif color == "random":
            obj_color = Palette.choose_color(np.random.rand())
        else:
            obj_color = parse_color(color)

        if text_color is None:
            obj_text_color = obj_color
        else:
            obj_text_color = text_color

        if draw_points:
            for point, live in zip(d.points, d.live_points):
                if live or not hide_dead_points:
                    Drawer.circle(
                        frame,
                        tuple(point.astype(int)),
                        radius=radius,
                        color=obj_color,
                        thickness=thickness,
                    )

        if draw_labels or draw_ids or draw_scores:
            position = d.points[d.live_points].mean(axis=0)
            position -= radius
            text = _build_text(
                d, draw_labels=draw_labels, draw_ids=draw_ids, draw_scores=draw_scores
            )

            Drawer.text(
                frame,
                text,
                tuple(position.astype(int)),
                size=text_size,
                color=obj_text_color,
                thickness=text_thickness,
            )

    return frame

draw_tracked_objects(frame, objects, radius=None, color=None, id_size=None, id_thickness=None, draw_points=True, color_by_label=False, draw_labels=False, label_size=None) #

Deprecated. Use draw_points instead.

Source code in norfair/drawing/draw_points.py
def draw_tracked_objects(
    frame: np.ndarray,
    objects: Sequence["TrackedObject"],
    radius: Optional[int] = None,
    color: Optional[ColorLike] = None,
    id_size: Optional[float] = None,
    id_thickness: Optional[int] = None,
    draw_points: bool = True,  # pylint: disable=redefined-outer-name
    color_by_label: bool = False,
    draw_labels: bool = False,
    label_size: Optional[int] = None,
):
    """
    **Deprecated** use [`draw_points`][norfair.drawing.draw_points.draw_points]
    """
    warn_once("draw_tracked_objects is deprecated, use draw_points instead")

    frame_scale = frame.shape[0] / 100
    if radius is None:
        radius = int(frame_scale * 0.5)
    if id_size is None:
        id_size = frame_scale / 10
    if id_thickness is None:
        id_thickness = int(frame_scale / 5)
    if label_size is None:
        label_size = int(max(frame_scale / 100, 1))

    _draw_points_alias(
        frame=frame,
        drawables=objects,
        color="by_label" if color_by_label else color,
        radius=radius,
        thickness=None,
        draw_labels=draw_labels,
        draw_ids=id_size is not None and id_size > 0,
        draw_points=draw_points,
        text_size=label_size or id_size,
        text_thickness=id_thickness,
        text_color=None,
        hide_dead_points=True,
    )

draw_boxes #

draw_boxes(frame, drawables=None, color='by_id', thickness=None, random_color=None, color_by_label=None, draw_labels=False, text_size=None, draw_ids=True, text_color=None, text_thickness=None, draw_box=True, detections=None, line_color=None, line_width=None, label_size=None, draw_scores=False) #

Draw bounding boxes corresponding to Detections or TrackedObjects.

Parameters:

Name Type Description Default
frame ndarray

The OpenCV frame to draw on. Modified in place.

required
drawables Union[Sequence[Detection], Sequence[TrackedObject]]

List of objects to draw; Detections and TrackedObjects are accepted. These objects are assumed to contain two bi-dimensional points defining the bounding box as [[x0, y0], [x1, y1]].

None
color ColorLike

This parameter can take:

  1. A color as a tuple of ints describing the BGR (0, 0, 255)
  2. A 6-digit hex string "#FF0000"
  3. One of the defined color names "red"
  4. A string defining the strategy to choose colors from the Palette:

    1. based on the id of the objects "by_id"
    2. based on the label of the objects "by_label"
    3. random choice "random"

If using the by_id or by_label strategy but your objects don't have that field defined (Detections never have ids), the selected color will be the same for all objects (the Palette's default Color).

'by_id'
thickness Optional[int]

Thickness or width of the line.

None
random_color bool

Deprecated. Set color="random".

None
color_by_label bool

Deprecated. Set color="by_label".

None
draw_labels bool

If set to True, the label is added to a title that is drawn on top of the box. If an object doesn't have a label this parameter is ignored.

False
draw_scores bool

If set to True, the score is added to a title that is drawn on top of the box. If an object doesn't have a score this parameter is ignored.

False
text_size Optional[float]

Size of the title, the value is used as a multiplier of the base size of the font. By default the size is scaled automatically based on the frame size.

None
draw_ids bool

If set to True, the id is added to a title that is drawn on top of the box. If an object doesn't have an id this parameter is ignored.

True
text_color Optional[ColorLike]

Color of the text. By default the same color as the box is used.

None
text_thickness Optional[int]

Thickness of the font. By default it's scaled with the text_size.

None
draw_box bool

Set to False to hide the box and just draw the text.

True
detections Sequence[Detection]

Deprecated. Use drawables.

None
line_color Optional[ColorLike]

Deprecated. Use color.

None
line_width Optional[int]

Deprecated. Use thickness.

None
label_size Optional[int]

Deprecated. Use text_size.

None

Returns:

Type Description
ndarray

The resulting frame.
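
A small sketch of how box-style Detections map onto draw_boxes; the coordinates below are made up and the zero frame is a stand-in for a real video frame:

>>> import numpy as np
>>> from norfair import Detection, draw_boxes
>>> # each Detection carries the two box corners as [[x0, y0], [x1, y1]]
>>> detections = [
>>>    Detection(points=np.array([[10, 20], [50, 80]]), label="person"),
>>>    Detection(points=np.array([[100, 40], [160, 120]]), label="car"),
>>> ]
>>> frame = np.zeros((240, 320, 3), dtype=np.uint8)
>>> frame = draw_boxes(frame, drawables=detections, color="by_label", draw_labels=True, draw_ids=False)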

Source code in norfair/drawing/draw_boxes.py
def draw_boxes(
    frame: np.ndarray,
    drawables: Union[Sequence[Detection], Sequence[TrackedObject]] = None,
    color: ColorLike = "by_id",
    thickness: Optional[int] = None,
    random_color: bool = None,  # Deprecated
    color_by_label: bool = None,  # Deprecated
    draw_labels: bool = False,
    text_size: Optional[float] = None,
    draw_ids: bool = True,
    text_color: Optional[ColorLike] = None,
    text_thickness: Optional[int] = None,
    draw_box: bool = True,
    detections: Sequence["Detection"] = None,  # Deprecated
    line_color: Optional[ColorLike] = None,  # Deprecated
    line_width: Optional[int] = None,  # Deprecated
    label_size: Optional[int] = None,  # Deprecated´
    draw_scores: bool = False,
) -> np.ndarray:
    """
    Draw bounding boxes corresponding to Detections or TrackedObjects.

    Parameters
    ----------
    frame : np.ndarray
        The OpenCV frame to draw on. Modified in place.
    drawables : Union[Sequence[Detection], Sequence[TrackedObject]], optional
        List of objects to draw, Detections and TrackedObjects are accepted.
        This objects are assumed to contain 2 bi-dimensional points defining
        the bounding box as `[[x0, y0], [x1, y1]]`.
    color : ColorLike, optional
        This parameter can take:

        1. A color as a tuple of ints describing the BGR `(0, 0, 255)`
        2. A 6-digit hex string `"#FF0000"`
        3. One of the defined color names `"red"`
        4. A string defining the strategy to choose colors from the Palette:

            1. based on the id of the objects `"by_id"`
            2. based on the label of the objects `"by_label"`
            3. random choice `"random"`

        If using `by_id` or `by_label` strategy but your objects don't
        have that field defined (Detections never have ids) the
        selected color will be the same for all objects (Palette's default Color).
    thickness : Optional[int], optional
        Thickness or width of the line.
    random_color : bool, optional
        **Deprecated**. Set color="random".
    color_by_label : bool, optional
        **Deprecated**. Set color="by_label".
    draw_labels : bool, optional
        If set to True, the label is added to a title that is drawn on top of the box.
        If an object doesn't have a label this parameter is ignored.
    draw_scores : bool, optional
        If set to True, the score is added to a title that is drawn on top of the box.
        If an object doesn't have a label this parameter is ignored.
    text_size : Optional[float], optional
        Size of the title, the value is used as a multiplier of the base size of the font.
        By default the size is scaled automatically based on the frame size.
    draw_ids : bool, optional
        If set to True, the id is added to a title that is drawn on top of the box.
        If an object doesn't have an id this parameter is ignored.
    text_color : Optional[ColorLike], optional
        Color of the text. By default the same color as the box is used.
    text_thickness : Optional[int], optional
        Thickness of the font. By default it's scaled with the `text_size`.
    draw_box : bool, optional
        Set to False to hide the box and just draw the text.
    detections : Sequence[Detection], optional
        **Deprecated**. Use drawables.
    line_color: Optional[ColorLike], optional
        **Deprecated**. Use color.
    line_width: Optional[int], optional
        **Deprecated**. Use thickness.
    label_size: Optional[int], optional
        **Deprecated**. Use text_size.

    Returns
    -------
    np.ndarray
        The resulting frame.
    """
    #
    # handle deprecated parameters
    #
    if random_color is not None:
        warn_once(
            'Parameter "random_color" is deprecated, set `color="random"` instead'
        )
        color = "random"
    if color_by_label is not None:
        warn_once(
            'Parameter "color_by_label" is deprecated, set `color="by_label"` instead'
        )
        color = "by_label"
    if detections is not None:
        warn_once('Parameter "detections" is deprecated, use "drawables" instead')
        drawables = detections
    if line_color is not None:
        warn_once('Parameter "line_color" is deprecated, use "color" instead')
        color = line_color
    if line_width is not None:
        warn_once('Parameter "line_width" is deprecated, use "thickness" instead')
        thickness = line_width
    if label_size is not None:
        warn_once('Parameter "label_size" is deprecated, use "text_size" instead')
        text_size = label_size
    # end

    if color is None:
        color = "by_id"
    if thickness is None:
        thickness = int(max(frame.shape) / 500)

    if drawables is None:
        return frame

    if text_color is not None:
        text_color = parse_color(text_color)

    for obj in drawables:
        if not isinstance(obj, Drawable):
            d = Drawable(obj)
        else:
            d = obj

        if color == "by_id":
            obj_color = Palette.choose_color(d.id)
        elif color == "by_label":
            obj_color = Palette.choose_color(d.label)
        elif color == "random":
            obj_color = Palette.choose_color(np.random.rand())
        else:
            obj_color = parse_color(color)

        points = d.points.astype(int)
        if draw_box:
            Drawer.rectangle(
                frame,
                tuple(points),
                color=obj_color,
                thickness=thickness,
            )

        text = _build_text(
            d, draw_labels=draw_labels, draw_ids=draw_ids, draw_scores=draw_scores
        )
        if text:
            if text_color is None:
                obj_text_color = obj_color
            else:
                obj_text_color = text_color
            # the anchor will become the bottom-left of the text,
            # we select-top left of the bbox compensating for the thickness of the box
            text_anchor = (
                points[0, 0] - thickness // 2,
                points[0, 1] - thickness // 2 - 1,
            )
            frame = Drawer.text(
                frame,
                text,
                position=text_anchor,
                size=text_size,
                color=obj_text_color,
                thickness=text_thickness,
            )

    return frame

draw_tracked_boxes(frame, objects, border_colors=None, border_width=None, id_size=None, id_thickness=None, draw_box=True, color_by_label=False, draw_labels=False, label_size=None, label_width=None) #

Deprecated. Use draw_boxes instead.

Source code in norfair/drawing/draw_boxes.py
def draw_tracked_boxes(
    frame: np.ndarray,
    objects: Sequence["TrackedObject"],
    border_colors: Optional[Tuple[int, int, int]] = None,
    border_width: Optional[int] = None,
    id_size: Optional[int] = None,
    id_thickness: Optional[int] = None,
    draw_box: bool = True,
    color_by_label: bool = False,
    draw_labels: bool = False,
    label_size: Optional[int] = None,
    label_width: Optional[int] = None,
) -> np.array:
    "**Deprecated**. Use [`draw_box`][norfair.drawing.draw_boxes.draw_boxes]"
    warn_once("draw_tracked_boxes is deprecated, use draw_box instead")
    return draw_boxes(
        frame=frame,
        drawables=objects,
        color="by_label" if color_by_label else border_colors,
        thickness=border_width,
        text_size=label_size or id_size,
        text_thickness=id_thickness or label_width,
        draw_labels=draw_labels,
        draw_ids=id_size is not None and id_size > 0,
        draw_box=draw_box,
    )

color #

Color #

Contains predefined colors.

Colors are defined as a tuple of integers between 0 and 255 expressing the values in BGR. This is the format OpenCV uses.
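
Since each attribute is just a plain BGR tuple, it can be passed directly to the drawers or to OpenCV calls, for example:

>>> from norfair import Color
>>> Color.red
(0, 0, 255)
>>> Color.blue
(255, 0, 0)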

Source code in norfair/drawing/color.py
class Color:
    """
    Contains predefined colors.

    Colors are defined as a Tuple of integers between 0 and 255 expressing the values in BGR
    This is the format opencv uses.
    """

    # from PIL.ImageColors.colormap
    aliceblue = hex_to_bgr("#f0f8ff")
    antiquewhite = hex_to_bgr("#faebd7")
    aqua = hex_to_bgr("#00ffff")
    aquamarine = hex_to_bgr("#7fffd4")
    azure = hex_to_bgr("#f0ffff")
    beige = hex_to_bgr("#f5f5dc")
    bisque = hex_to_bgr("#ffe4c4")
    black = hex_to_bgr("#000000")
    blanchedalmond = hex_to_bgr("#ffebcd")
    blue = hex_to_bgr("#0000ff")
    blueviolet = hex_to_bgr("#8a2be2")
    brown = hex_to_bgr("#a52a2a")
    burlywood = hex_to_bgr("#deb887")
    cadetblue = hex_to_bgr("#5f9ea0")
    chartreuse = hex_to_bgr("#7fff00")
    chocolate = hex_to_bgr("#d2691e")
    coral = hex_to_bgr("#ff7f50")
    cornflowerblue = hex_to_bgr("#6495ed")
    cornsilk = hex_to_bgr("#fff8dc")
    crimson = hex_to_bgr("#dc143c")
    cyan = hex_to_bgr("#00ffff")
    darkblue = hex_to_bgr("#00008b")
    darkcyan = hex_to_bgr("#008b8b")
    darkgoldenrod = hex_to_bgr("#b8860b")
    darkgray = hex_to_bgr("#a9a9a9")
    darkgrey = hex_to_bgr("#a9a9a9")
    darkgreen = hex_to_bgr("#006400")
    darkkhaki = hex_to_bgr("#bdb76b")
    darkmagenta = hex_to_bgr("#8b008b")
    darkolivegreen = hex_to_bgr("#556b2f")
    darkorange = hex_to_bgr("#ff8c00")
    darkorchid = hex_to_bgr("#9932cc")
    darkred = hex_to_bgr("#8b0000")
    darksalmon = hex_to_bgr("#e9967a")
    darkseagreen = hex_to_bgr("#8fbc8f")
    darkslateblue = hex_to_bgr("#483d8b")
    darkslategray = hex_to_bgr("#2f4f4f")
    darkslategrey = hex_to_bgr("#2f4f4f")
    darkturquoise = hex_to_bgr("#00ced1")
    darkviolet = hex_to_bgr("#9400d3")
    deeppink = hex_to_bgr("#ff1493")
    deepskyblue = hex_to_bgr("#00bfff")
    dimgray = hex_to_bgr("#696969")
    dimgrey = hex_to_bgr("#696969")
    dodgerblue = hex_to_bgr("#1e90ff")
    firebrick = hex_to_bgr("#b22222")
    floralwhite = hex_to_bgr("#fffaf0")
    forestgreen = hex_to_bgr("#228b22")
    fuchsia = hex_to_bgr("#ff00ff")
    gainsboro = hex_to_bgr("#dcdcdc")
    ghostwhite = hex_to_bgr("#f8f8ff")
    gold = hex_to_bgr("#ffd700")
    goldenrod = hex_to_bgr("#daa520")
    gray = hex_to_bgr("#808080")
    grey = hex_to_bgr("#808080")
    green = (0, 128, 0)
    greenyellow = hex_to_bgr("#adff2f")
    honeydew = hex_to_bgr("#f0fff0")
    hotpink = hex_to_bgr("#ff69b4")
    indianred = hex_to_bgr("#cd5c5c")
    indigo = hex_to_bgr("#4b0082")
    ivory = hex_to_bgr("#fffff0")
    khaki = hex_to_bgr("#f0e68c")
    lavender = hex_to_bgr("#e6e6fa")
    lavenderblush = hex_to_bgr("#fff0f5")
    lawngreen = hex_to_bgr("#7cfc00")
    lemonchiffon = hex_to_bgr("#fffacd")
    lightblue = hex_to_bgr("#add8e6")
    lightcoral = hex_to_bgr("#f08080")
    lightcyan = hex_to_bgr("#e0ffff")
    lightgoldenrodyellow = hex_to_bgr("#fafad2")
    lightgreen = hex_to_bgr("#90ee90")
    lightgray = hex_to_bgr("#d3d3d3")
    lightgrey = hex_to_bgr("#d3d3d3")
    lightpink = hex_to_bgr("#ffb6c1")
    lightsalmon = hex_to_bgr("#ffa07a")
    lightseagreen = hex_to_bgr("#20b2aa")
    lightskyblue = hex_to_bgr("#87cefa")
    lightslategray = hex_to_bgr("#778899")
    lightslategrey = hex_to_bgr("#778899")
    lightsteelblue = hex_to_bgr("#b0c4de")
    lightyellow = hex_to_bgr("#ffffe0")
    lime = hex_to_bgr("#00ff00")
    limegreen = hex_to_bgr("#32cd32")
    linen = hex_to_bgr("#faf0e6")
    magenta = hex_to_bgr("#ff00ff")
    maroon = hex_to_bgr("#800000")
    mediumaquamarine = hex_to_bgr("#66cdaa")
    mediumblue = hex_to_bgr("#0000cd")
    mediumorchid = hex_to_bgr("#ba55d3")
    mediumpurple = hex_to_bgr("#9370db")
    mediumseagreen = hex_to_bgr("#3cb371")
    mediumslateblue = hex_to_bgr("#7b68ee")
    mediumspringgreen = hex_to_bgr("#00fa9a")
    mediumturquoise = hex_to_bgr("#48d1cc")
    mediumvioletred = hex_to_bgr("#c71585")
    midnightblue = hex_to_bgr("#191970")
    mintcream = hex_to_bgr("#f5fffa")
    mistyrose = hex_to_bgr("#ffe4e1")
    moccasin = hex_to_bgr("#ffe4b5")
    navajowhite = hex_to_bgr("#ffdead")
    navy = hex_to_bgr("#000080")
    oldlace = hex_to_bgr("#fdf5e6")
    olive = hex_to_bgr("#808000")
    olivedrab = hex_to_bgr("#6b8e23")
    orange = hex_to_bgr("#ffa500")
    orangered = hex_to_bgr("#ff4500")
    orchid = hex_to_bgr("#da70d6")
    palegoldenrod = hex_to_bgr("#eee8aa")
    palegreen = hex_to_bgr("#98fb98")
    paleturquoise = hex_to_bgr("#afeeee")
    palevioletred = hex_to_bgr("#db7093")
    papayawhip = hex_to_bgr("#ffefd5")
    peachpuff = hex_to_bgr("#ffdab9")
    peru = hex_to_bgr("#cd853f")
    pink = hex_to_bgr("#ffc0cb")
    plum = hex_to_bgr("#dda0dd")
    powderblue = hex_to_bgr("#b0e0e6")
    purple = hex_to_bgr("#800080")
    rebeccapurple = hex_to_bgr("#663399")
    red = hex_to_bgr("#ff0000")
    rosybrown = hex_to_bgr("#bc8f8f")
    royalblue = hex_to_bgr("#4169e1")
    saddlebrown = hex_to_bgr("#8b4513")
    salmon = hex_to_bgr("#fa8072")
    sandybrown = hex_to_bgr("#f4a460")
    seagreen = hex_to_bgr("#2e8b57")
    seashell = hex_to_bgr("#fff5ee")
    sienna = hex_to_bgr("#a0522d")
    silver = hex_to_bgr("#c0c0c0")
    skyblue = hex_to_bgr("#87ceeb")
    slateblue = hex_to_bgr("#6a5acd")
    slategray = hex_to_bgr("#708090")
    slategrey = hex_to_bgr("#708090")
    snow = hex_to_bgr("#fffafa")
    springgreen = hex_to_bgr("#00ff7f")
    steelblue = hex_to_bgr("#4682b4")
    tan = hex_to_bgr("#d2b48c")
    teal = hex_to_bgr("#008080")
    thistle = hex_to_bgr("#d8bfd8")
    tomato = hex_to_bgr("#ff6347")
    turquoise = hex_to_bgr("#40e0d0")
    violet = hex_to_bgr("#ee82ee")
    wheat = hex_to_bgr("#f5deb3")
    white = hex_to_bgr("#ffffff")
    whitesmoke = hex_to_bgr("#f5f5f5")
    yellow = hex_to_bgr("#ffff00")
    yellowgreen = hex_to_bgr("#9acd32")

    # seaborn tab20 colors
    tab1 = hex_to_bgr("#1f77b4")
    tab2 = hex_to_bgr("#aec7e8")
    tab3 = hex_to_bgr("#ff7f0e")
    tab4 = hex_to_bgr("#ffbb78")
    tab5 = hex_to_bgr("#2ca02c")
    tab6 = hex_to_bgr("#98df8a")
    tab7 = hex_to_bgr("#d62728")
    tab8 = hex_to_bgr("#ff9896")
    tab9 = hex_to_bgr("#9467bd")
    tab10 = hex_to_bgr("#c5b0d5")
    tab11 = hex_to_bgr("#8c564b")
    tab12 = hex_to_bgr("#c49c94")
    tab13 = hex_to_bgr("#e377c2")
    tab14 = hex_to_bgr("#f7b6d2")
    tab15 = hex_to_bgr("#7f7f7f")
    tab16 = hex_to_bgr("#c7c7c7")
    tab17 = hex_to_bgr("#bcbd22")
    tab18 = hex_to_bgr("#dbdb8d")
    tab19 = hex_to_bgr("#17becf")
    tab20 = hex_to_bgr("#9edae5")
    # seaborn colorblind
    cb1 = hex_to_bgr("#0173b2")
    cb2 = hex_to_bgr("#de8f05")
    cb3 = hex_to_bgr("#029e73")
    cb4 = hex_to_bgr("#d55e00")
    cb5 = hex_to_bgr("#cc78bc")
    cb6 = hex_to_bgr("#ca9161")
    cb7 = hex_to_bgr("#fbafe4")
    cb8 = hex_to_bgr("#949494")
    cb9 = hex_to_bgr("#ece133")
    cb10 = hex_to_bgr("#56b4e9")

Palette #

Class to control the color palette for drawing.

Examples:

Change palette:

>>> from norfair import Palette
>>> Palette.set("colorblind")
>>> # or a custom palette
>>> from norfair import Color
>>> Palette.set([Color.red, Color.blue, "#ffeeff"])
Source code in norfair/drawing/color.py
class Palette:
    """
    Class to control the color pallete for drawing.

    Examples
    --------
    Change palette:
    >>> from norfair import Palette
    >>> Palette.set("colorblind")
    >>> # or a custom palette
    >>> from norfair import Color
    >>> Palette.set([Color.red, Color.blue, "#ffeeff"])
    """

    _colors = PALETTES["tab10"]
    _default_color = Color.black

    @classmethod
    def set(cls, palette: Union[str, Iterable[ColorLike]]):
        """
        Selects a color palette.

        Parameters
        ----------
        palette : Union[str, Iterable[ColorLike]]
            can be either
            - the name of one of the predefined palettes `tab10`, `tab20`, or `colorblind`
            - a list of ColorLike objects that can be parsed by [`parse_color`][norfair.drawing.color.parse_color]
        """
        if isinstance(palette, str):
            try:
                cls._colors = PALETTES[palette]
            except KeyError as e:
                raise ValueError(
                    f"Invalid palette name '{palette}', valid values are {PALETTES.keys()}"
                ) from e
        else:
            colors = []
            for c in palette:
                colors.append(parse_color(c))

            cls._colors = colors

    @classmethod
    def set_default_color(cls, color: ColorLike):
        """
        Selects the default color of `choose_color` when hashable is None.

        Parameters
        ----------
        color : ColorLike
            The new default color.
        """
        cls._default_color = parse_color(color)

    @classmethod
    def choose_color(cls, hashable: Hashable) -> ColorType:
        if hashable is None:
            return cls._default_color
        return cls._colors[abs(hash(hashable)) % len(cls._colors)]

set(palette) classmethod #

Selects a color palette.

Parameters:

Name Type Description Default
palette Union[str, Iterable[ColorLike]]

Can be either:

  1. the name of one of the predefined palettes tab10, tab20, or colorblind
  2. a list of ColorLike objects that can be parsed by parse_color

required
Source code in norfair/drawing/color.py
@classmethod
def set(cls, palette: Union[str, Iterable[ColorLike]]):
    """
    Selects a color palette.

    Parameters
    ----------
    palette : Union[str, Iterable[ColorLike]]
        can be either
        - the name of one of the predefined palettes `tab10`, `tab20`, or `colorblind`
        - a list of ColorLike objects that can be parsed by [`parse_color`][norfair.drawing.color.parse_color]
    """
    if isinstance(palette, str):
        try:
            cls._colors = PALETTES[palette]
        except KeyError as e:
            raise ValueError(
                f"Invalid palette name '{palette}', valid values are {PALETTES.keys()}"
            ) from e
    else:
        colors = []
        for c in palette:
            colors.append(parse_color(c))

        cls._colors = colors

set_default_color(color) classmethod #

Selects the default color of choose_color when hashable is None.

Parameters:

Name Type Description Default
color ColorLike

The new default color.

required
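
For example, since Detections never have an id, the "by_id" strategy colors them with this default; a small sketch of switching it to something more visible (any ColorLike value is accepted):

>>> from norfair import Color, Palette
>>> Palette.set_default_color(Color.grey)
>>> # or any other ColorLike value, e.g. a hex string
>>> Palette.set_default_color("#ffeeff")
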
Source code in norfair/drawing/color.py
@classmethod
def set_default_color(cls, color: ColorLike):
    """
    Selects the default color of `choose_color` when hashable is None.

    Parameters
    ----------
    color : ColorLike
        The new default color.
    """
    cls._default_color = parse_color(color)

hex_to_bgr(hex_value) #

Converts conventional 6-digit hex colors to BGR tuples.

Parameters:

Name Type Description Default
hex_value str

Hex value with a leading #, for instance "#ff0000".

required

Returns:

Type Description
Tuple[int, int, int]

BGR values

Raises:

Type Description
ValueError

if the string is invalid
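
For example (both the 6-digit form and the 3-digit shorthand are accepted):

>>> from norfair.drawing.color import hex_to_bgr
>>> hex_to_bgr("#ff0000")
(0, 0, 255)
>>> hex_to_bgr("#f00")
(0, 0, 255)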

Source code in norfair/drawing/color.py
def hex_to_bgr(hex_value: str) -> ColorType:
    """Converts conventional 6 digits hex colors to BGR tuples

    Parameters
    ----------
    hex_value : str
        hex value with leading `#` for instance `"#ff0000"`

    Returns
    -------
    Tuple[int, int, int]
        BGR values

    Raises
    ------
    ValueError
        if the string is invalid
    """
    if re.match("#[a-f0-9]{6}$", hex_value):
        return (
            int(hex_value[5:7], 16),
            int(hex_value[3:5], 16),
            int(hex_value[1:3], 16),
        )

    if re.match("#[a-f0-9]{3}$", hex_value):
        return (
            int(hex_value[3] * 2, 16),
            int(hex_value[2] * 2, 16),
            int(hex_value[1] * 2, 16),
        )
    raise ValueError(f"'{hex_value}' is not a valid color")

parse_color(color_like) #

Makes a best effort to parse the given value into a Color.

Parameters:

Name Type Description Default
color_like ColorLike

Can be one of:

  1. a string with the 6-digit hex value ("#ff0000")
  2. a string with one of the names defined in Color ("red")
  3. a BGR tuple ((0, 0, 255))

required

Returns:

Type Description
Color

The BGR tuple.
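
For example, all of the following resolve to a BGR tuple:

>>> from norfair.drawing.color import parse_color
>>> parse_color("red")
(0, 0, 255)
>>> parse_color("#00ff00")
(0, 255, 0)
>>> parse_color((255, 0, 0))
(255, 0, 0)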

Source code in norfair/drawing/color.py
def parse_color(color_like: ColorLike) -> ColorType:
    """Makes best effort to parse the given value to a Color

    Parameters
    ----------
    color_like : ColorLike
        Can be one of:

        1. a string with the 6 digits hex value (`"#ff0000"`)
        2. a string with one of the names defined in Colors (`"red"`)
        3. a BGR tuple (`(0, 0, 255)`)

    Returns
    -------
    Color
        The BGR tuple.
    """
    if isinstance(color_like, str):
        if color_like.startswith("#"):
            return hex_to_bgr(color_like)
        else:
            return getattr(Color, color_like)
    # TODO: validate?
    return tuple([int(v) for v in color_like])

path #

Paths #

Class that draws the paths taken by a set of points of interest defined from the coordinates of each tracker estimation.

Parameters:

Name Type Description Default
get_points_to_draw Optional[Callable[[array], array]]

Function that takes a list of points (the .estimate attribute of a TrackedObject) and returns a list of points for which we want to draw their paths.

By default it is the mean point of all the points in the tracker.

None
thickness Optional[int]

Thickness of the circles representing the paths of interest.

None
color Optional[Tuple[int, int, int]]

Color of the circles representing the paths of interest.

None
radius Optional[int]

Radius of the circles representing the paths of interest.

None
attenuation float

A float in [0, 1] that dictates the speed at which the path is erased. If it is 0 the path is never erased.

0.01

Examples:

>>> from norfair import Tracker, Video, Path
>>> video = Video("video.mp4")
>>> tracker = Tracker(...)
>>> path_drawer = Path()
>>> for frame in video:
>>>    detections = get_detections(frame)  # runs detector and returns Detections
>>>    tracked_objects = tracker.update(detections)
>>>    frame = path_drawer.draw(frame, tracked_objects)
>>>    video.write(frame)
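
The default get_points_to_draw follows the mean point of each object's estimate; a hedged sketch of a custom one (bottom_center is a hypothetical helper) that instead follows the bottom-center of a bounding-box estimate [[x0, y0], [x1, y1]]:

>>> import numpy as np
>>> from norfair.drawing.path import Paths
>>> def bottom_center(points):
>>>    points = np.array(points)
>>>    return [np.array([points[:, 0].mean(), points[:, 1].max()])]
>>> path_drawer = Paths(get_points_to_draw=bottom_center, attenuation=0.05)
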
Source code in norfair/drawing/path.py
class Paths:
    """
    Class that draws the paths taken by a set of points of interest defined from the coordinates of each tracker estimation.

    Parameters
    ----------
    get_points_to_draw : Optional[Callable[[np.array], np.array]], optional
        Function that takes a list of points (the `.estimate` attribute of a [`TrackedObject`][norfair.tracker.TrackedObject])
        and returns a list of points for which we want to draw their paths.

        By default it is the mean point of all the points in the tracker.
    thickness : Optional[int], optional
        Thickness of the circles representing the paths of interest.
    color : Optional[Tuple[int, int, int]], optional
        [Color][norfair.drawing.Color] of the circles representing the paths of interest.
    radius : Optional[int], optional
        Radius of the circles representing the paths of interest.
    attenuation : float, optional
        A float number in [0, 1] that dictates the speed at which the path is erased.
        if it is `0` then the path is never erased.

    Examples
    --------
    >>> from norfair import Tracker, Video, Path
    >>> video = Video("video.mp4")
    >>> tracker = Tracker(...)
    >>> path_drawer = Path()
    >>> for frame in video:
    >>>    detections = get_detections(frame)  # runs detector and returns Detections
    >>>    tracked_objects = tracker.update(detections)
    >>>    frame = path_drawer.draw(frame, tracked_objects)
    >>>    video.write(frame)
    """

    def __init__(
        self,
        get_points_to_draw: Optional[Callable[[np.array], np.array]] = None,
        thickness: Optional[int] = None,
        color: Optional[Tuple[int, int, int]] = None,
        radius: Optional[int] = None,
        attenuation: float = 0.01,
    ):
        if get_points_to_draw is None:

            def get_points_to_draw(points):
                return [np.mean(np.array(points), axis=0)]

        self.get_points_to_draw = get_points_to_draw

        self.radius = radius
        self.thickness = thickness
        self.color = color
        self.mask = None
        self.attenuation_factor = 1 - attenuation

    def draw(
        self, frame: np.ndarray, tracked_objects: Sequence[TrackedObject]
    ) -> np.array:
        """
        Draw the paths of the points interest on a frame.

        !!! warning
            This method does **not** draw frames in place as other drawers do, the resulting frame is returned.

        Parameters
        ----------
        frame : np.ndarray
            The OpenCV frame to draw on.
        tracked_objects : Sequence[TrackedObject]
            List of [`TrackedObject`][norfair.tracker.TrackedObject] to get the points of interest in order to update the paths.

        Returns
        -------
        np.array
            The resulting frame.
        """
        if self.mask is None:
            frame_scale = frame.shape[0] / 100

            if self.radius is None:
                self.radius = int(max(frame_scale * 0.7, 1))
            if self.thickness is None:
                self.thickness = int(max(frame_scale / 7, 1))

            self.mask = np.zeros(frame.shape, np.uint8)

        self.mask = (self.mask * self.attenuation_factor).astype("uint8")

        for obj in tracked_objects:
            if obj.abs_to_rel is not None:
                warn_once(
                    "It seems that your using the Path drawer together with MotionEstimator. This is not fully supported and the results will not be what's expected"
                )

            if self.color is None:
                color = Palette.choose_color(obj.id)
            else:
                color = self.color

            points_to_draw = self.get_points_to_draw(obj.estimate)

            for point in points_to_draw:
                self.mask = Drawer.circle(
                    self.mask,
                    position=tuple(point.astype(int)),
                    radius=self.radius,
                    color=color,
                    thickness=self.thickness,
                )

        return Drawer.alpha_blend(self.mask, frame, alpha=1, beta=1)

draw(frame, tracked_objects) #

Draw the paths of the points of interest on a frame.

Warning

This method does not draw frames in place as other drawers do, the resulting frame is returned.

Parameters:

Name Type Description Default
frame ndarray

The OpenCV frame to draw on.

required
tracked_objects Sequence[TrackedObject]

List of TrackedObject to get the points of interest in order to update the paths.

required

Returns:

Type Description
array

The resulting frame.

Source code in norfair/drawing/path.py
def draw(
    self, frame: np.ndarray, tracked_objects: Sequence[TrackedObject]
) -> np.array:
    """
    Draw the paths of the points interest on a frame.

    !!! warning
        This method does **not** draw frames in place as other drawers do, the resulting frame is returned.

    Parameters
    ----------
    frame : np.ndarray
        The OpenCV frame to draw on.
    tracked_objects : Sequence[TrackedObject]
        List of [`TrackedObject`][norfair.tracker.TrackedObject] to get the points of interest in order to update the paths.

    Returns
    -------
    np.array
        The resulting frame.
    """
    if self.mask is None:
        frame_scale = frame.shape[0] / 100

        if self.radius is None:
            self.radius = int(max(frame_scale * 0.7, 1))
        if self.thickness is None:
            self.thickness = int(max(frame_scale / 7, 1))

        self.mask = np.zeros(frame.shape, np.uint8)

    self.mask = (self.mask * self.attenuation_factor).astype("uint8")

    for obj in tracked_objects:
        if obj.abs_to_rel is not None:
            warn_once(
                "It seems that your using the Path drawer together with MotionEstimator. This is not fully supported and the results will not be what's expected"
            )

        if self.color is None:
            color = Palette.choose_color(obj.id)
        else:
            color = self.color

        points_to_draw = self.get_points_to_draw(obj.estimate)

        for point in points_to_draw:
            self.mask = Drawer.circle(
                self.mask,
                position=tuple(point.astype(int)),
                radius=self.radius,
                color=color,
                thickness=self.thickness,
            )

    return Drawer.alpha_blend(self.mask, frame, alpha=1, beta=1)

AbsolutePaths #

Class that draws the absolute paths taken by a set of points.

Works just like Paths but supports camera motion.

Warning

This drawer is not optimized, so it can be extremely slow. Performance degrades linearly with max_history * number_of_tracked_objects.

Parameters:

Name Type Description Default
get_points_to_draw Optional[Callable[[array], array]]

Function that takes a list of points (the .estimate attribute of a TrackedObject) and returns a list of points for which we want to draw their paths.

By default it is the mean point of all the points in the tracker.

None
thickness Optional[int]

Thickness of the circles representing the paths of interest.

None
color Optional[Tuple[int, int, int]]

Color of the circles representing the paths of interest.

None
radius Optional[int]

Radius of the circles representing the paths of interest.

None
max_history int

Number of past points to include in the path. High values make the drawing slower.

20

Examples:

>>> from norfair import Tracker, Video, Path
>>> video = Video("video.mp4")
>>> tracker = Tracker(...)
>>> path_drawer = Path()
>>> for frame in video:
>>>    detections = get_detections(frame)  # runs detector and returns Detections
>>>    tracked_objects = tracker.update(detections)
>>>    frame = path_drawer.draw(frame, tracked_objects)
>>>    video.write(frame)
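
Unlike Paths, the draw method also needs the coordinate transformation produced by the MotionEstimator; a hedged sketch of the intended loop (get_detections is a placeholder for your detector wrapper, and keyword arguments are used when updating the tracker to avoid mixing up its signature):

>>> from norfair import Tracker, Video
>>> from norfair.camera_motion import MotionEstimator
>>> from norfair.drawing.path import AbsolutePaths
>>> video = Video(input_path="video.mp4")
>>> tracker = Tracker(distance_function="euclidean", distance_threshold=30)
>>> motion_estimator = MotionEstimator()
>>> path_drawer = AbsolutePaths(max_history=20)
>>> for frame in video:
>>>    coord_transformations = motion_estimator.update(frame)
>>>    detections = get_detections(frame)  # placeholder detector wrapper
>>>    tracked_objects = tracker.update(detections=detections, coord_transformations=coord_transformations)
>>>    frame = path_drawer.draw(frame, tracked_objects, coord_transform=coord_transformations)
>>>    video.write(frame)
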
Source code in norfair/drawing/path.py
class AbsolutePaths:
    """
    Class that draws the absolute paths taken by a set of points.

    Works just like [`Paths`][norfair.drawing.Paths] but supports camera motion.

    !!! warning
        This drawer is not optimized so it can be stremely slow. Performance degrades linearly with
        `max_history * number_of_tracked_objects`.

    Parameters
    ----------
    get_points_to_draw : Optional[Callable[[np.array], np.array]], optional
        Function that takes a list of points (the `.estimate` attribute of a [`TrackedObject`][norfair.tracker.TrackedObject])
        and returns a list of points for which we want to draw their paths.

        By default it is the mean point of all the points in the tracker.
    thickness : Optional[int], optional
        Thickness of the circles representing the paths of interest.
    color : Optional[Tuple[int, int, int]], optional
        [Color][norfair.drawing.Color] of the circles representing the paths of interest.
    radius : Optional[int], optional
        Radius of the circles representing the paths of interest.
    max_history : int, optional
        Number of past points to include in the path. High values make the drawing slower

    Examples
    --------
    >>> from norfair import Tracker, Video, Path
    >>> video = Video("video.mp4")
    >>> tracker = Tracker(...)
    >>> path_drawer = Path()
    >>> for frame in video:
    >>>    detections = get_detections(frame)  # runs detector and returns Detections
    >>>    tracked_objects = tracker.update(detections)
    >>>    frame = path_drawer.draw(frame, tracked_objects)
    >>>    video.write(frame)
    """

    def __init__(
        self,
        get_points_to_draw: Optional[Callable[[np.array], np.array]] = None,
        thickness: Optional[int] = None,
        color: Optional[Tuple[int, int, int]] = None,
        radius: Optional[int] = None,
        max_history=20,
    ):

        if get_points_to_draw is None:

            def get_points_to_draw(points):
                return [np.mean(np.array(points), axis=0)]

        self.get_points_to_draw = get_points_to_draw

        self.radius = radius
        self.thickness = thickness
        self.color = color
        self.past_points = defaultdict(lambda: [])
        self.max_history = max_history
        self.alphas = np.linspace(0.99, 0.01, max_history)

    def draw(self, frame, tracked_objects, coord_transform=None):
        frame_scale = frame.shape[0] / 100

        if self.radius is None:
            self.radius = int(max(frame_scale * 0.7, 1))
        if self.thickness is None:
            self.thickness = int(max(frame_scale / 7, 1))
        for obj in tracked_objects:
            if not obj.live_points.any():
                continue

            if self.color is None:
                color = Palette.choose_color(obj.id)
            else:
                color = self.color

            points_to_draw = self.get_points_to_draw(obj.get_estimate(absolute=True))

            for point in coord_transform.abs_to_rel(points_to_draw):
                Drawer.circle(
                    frame,
                    position=tuple(point.astype(int)),
                    radius=self.radius,
                    color=color,
                    thickness=self.thickness,
                )

            last = points_to_draw
            for i, past_points in enumerate(self.past_points[obj.id]):
                overlay = frame.copy()
                last = coord_transform.abs_to_rel(last)
                for j, point in enumerate(coord_transform.abs_to_rel(past_points)):
                    Drawer.line(
                        overlay,
                        tuple(last[j].astype(int)),
                        tuple(point.astype(int)),
                        color=color,
                        thickness=self.thickness,
                    )
                last = past_points

                alpha = self.alphas[i]
                frame = Drawer.alpha_blend(overlay, frame, alpha=alpha)
            self.past_points[obj.id].insert(0, points_to_draw)
            self.past_points[obj.id] = self.past_points[obj.id][: self.max_history]
        return frame

fixed_camera #

FixedCamera #

Class used to stabilize video based on the camera motion.

Starts with a larger frame, where the original frame is drawn on top of a black background. As the camera moves, the smaller frame moves in the opposite direction, stabilizing the objects in it.

Useful for debugging or demoing the camera motion (see the camera stabilization example GIF in the online docs).

Warning

This only works with TranslationTransformation, using HomographyTransformation will result in unexpected behaviour.

Warning

If using other drawers, always apply this one last. Using other drawers on the scaled up frame will not work as expected.

Note

Sometimes the camera moves so far from the original point that the result won't fit in the scaled-up frame. In this case, a warning will be logged and the frames will be cropped to avoid errors.

Parameters:

Name Type Description Default
scale float

The resulting video will have a resolution of scale * (H, W) where HxW is the resolution of the original video. Use a bigger scale if the camera is moving too much.

2
attenuation float

Controls how fast the older frames fade to black.

0.05

Examples:

>>> # setup
>>> tracker = Tracker("frobenious", 100)
>>> motion_estimator = MotionEstimator()
>>> video = Video(input_path="video.mp4")
>>> fixed_camera = FixedCamera()
>>> # process video
>>> for frame in video:
>>>     coord_transformations = motion_estimator.update(frame)
>>>     detections = get_detections(frame)
>>>     tracked_objects = tracker.update(detections, coord_transformations)
>>>     draw_tracked_objects(frame, tracked_objects)  # fixed_camera should always be the last drawer
>>>     bigger_frame = fixed_camera.adjust_frame(frame, coord_transformations)
>>>     video.write(bigger_frame)
Source code in norfair/drawing/fixed_camera.py
class FixedCamera:
    """
    Class used to stabilize video based on the camera motion.

    Starts with a larger frame, where the original frame is drawn on top of a black background.
    As the camera moves, the smaller frame moves in the opposite direction, stabilizing the objects in it.

    Useful for debugging or demoing the camera motion.
    ![Example GIF](../../videos/camera_stabilization.gif)

    !!! Warning
        This only works with [`TranslationTransformation`][norfair.camera_motion.TranslationTransformation],
        using [`HomographyTransformation`][norfair.camera_motion.HomographyTransformation] will result in
        unexpected behaviour.

    !!! Warning
        If using other drawers, always apply this one last. Using other drawers on the scaled up frame will not work as expected.

    !!! Note
        Sometimes the camera moves so far from the original point that the result won't fit in the scaled-up frame.
        In this case, a warning will be logged and the frames will be cropped to avoid errors.

    Parameters
    ----------
    scale : float, optional
        The resulting video will have a resolution of `scale * (H, W)` where HxW is the resolution of the original video.
        Use a bigger scale if the camera is moving too much.
    attenuation : float, optional
        Controls how fast the older frames fade to black.

    Examples
    --------
    >>> # setup
    >>> tracker = Tracker("frobenious", 100)
    >>> motion_estimator = MotionEstimator()
    >>> video = Video(input_path="video.mp4")
    >>> fixed_camera = FixedCamera()
    >>> # process video
    >>> for frame in video:
    >>>     coord_transformations = motion_estimator.update(frame)
    >>>     detections = get_detections(frame)
    >>>     tracked_objects = tracker.update(detections, coord_transformations)
    >>>     draw_tracked_objects(frame, tracked_objects)  # fixed_camera should always be the last drawer
    >>>     bigger_frame = fixed_camera.adjust_frame(frame, coord_transformations)
    >>>     video.write(bigger_frame)
    """

    def __init__(self, scale: float = 2, attenuation: float = 0.05):
        self.scale = scale
        self._background = None
        self._attenuation_factor = 1 - attenuation

    def adjust_frame(
        self, frame: np.ndarray, coord_transformation: TranslationTransformation
    ) -> np.ndarray:
        """
        Render scaled up frame.

        Parameters
        ----------
        frame : np.ndarray
            The OpenCV frame.
        coord_transformation : TranslationTransformation
            The coordinate transformation as returned by the [`MotionEstimator`][norfair.camera_motion.MotionEstimator]

        Returns
        -------
        np.ndarray
            The new bigger frame with the original frame drawn on it.
        """

        # initialize background if necessary
        if self._background is None:
            original_size = (
                frame.shape[1],
                frame.shape[0],
            )  # OpenCV format is (width, height)

            scaled_size = tuple(
                (np.array(original_size) * np.array(self.scale)).round().astype(int)
            )
            self._background = np.zeros(
                [scaled_size[1], scaled_size[0], frame.shape[-1]],
                frame.dtype,
            )
        else:
            self._background = (self._background * self._attenuation_factor).astype(
                frame.dtype
            )

        # top_left is the anchor coordinate from which we start drawing the frame on top of the background
        # aim to draw it in the center of the background, but transformations will move this point
        top_left = (
            np.array(self._background.shape[:2]) // 2 - np.array(frame.shape[:2]) // 2
        )
        top_left = (
            coord_transformation.rel_to_abs(top_left[::-1]).round().astype(int)[::-1]
        )
        # box of the background that will be updated and the limits of it
        background_y0, background_y1 = (top_left[0], top_left[0] + frame.shape[0])
        background_x0, background_x1 = (top_left[1], top_left[1] + frame.shape[1])
        background_size_y, background_size_x = self._background.shape[:2]

        # define box of the frame that will be used
        # if the scale is not enough to support the movement, warn the user but keep drawing
        # cropping the frame so that the operation doesn't fail
        frame_y0, frame_y1, frame_x0, frame_x1 = (0, frame.shape[0], 0, frame.shape[1])
        if (
            background_y0 < 0
            or background_x0 < 0
            or background_y1 > background_size_y
            or background_x1 > background_size_x
        ):
            warn_once(
                "moving_camera_scale is not enough to cover the range of camera movement, frame will be cropped"
            )
            # crop left or top of the frame if necessary
            frame_y0 = max(-background_y0, 0)
            frame_x0 = max(-background_x0, 0)
            # crop right or bottom of the frame if necessary
            frame_y1 = max(
                min(background_size_y - background_y0, background_y1 - background_y0), 0
            )
            frame_x1 = max(
                min(background_size_x - background_x0, background_x1 - background_x0), 0
            )
            # handle cases where the limits of the background become negative which numpy will interpret incorrectly
            background_y0 = max(background_y0, 0)
            background_x0 = max(background_x0, 0)
            background_y1 = max(background_y1, 0)
            background_x1 = max(background_x1, 0)
        self._background[
            background_y0:background_y1, background_x0:background_x1, :
        ] = frame[frame_y0:frame_y1, frame_x0:frame_x1, :]
        return self._background

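As the constructor shows, every call to adjust_frame first multiplies the existing background by 1 - attenuation, so a region that has not been redrawn for N frames keeps a fraction of (1 - attenuation)**N of its brightness. A quick illustration of that decay with the default attenuation=0.05:

attenuation = 0.05  # FixedCamera default
factor = 1 - attenuation

# brightness remaining after n frames without being redrawn
for n in (1, 10, 30, 60):
    print(n, round(factor ** n, 3))  # 0.95, 0.599, 0.215, 0.046
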
adjust_frame(frame, coord_transformation) #

Render scaled up frame.

Parameters:

Name Type Description Default
frame ndarray

The OpenCV frame.

required
coord_transformation TranslationTransformation

The coordinate transformation as returned by the MotionEstimator

required

Returns:

Type Description
ndarray

The new bigger frame with the original frame drawn on it.

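In essence, adjust_frame pastes the current frame onto the larger background: the anchor starts at the centre of the background and is then displaced by the camera translation (the rel_to_abs call in the source below). A simplified sketch of that paste, with a hard-coded (x, y) shift standing in for the real TranslationTransformation and without the cropping branch:

import numpy as np

background = np.zeros((2160, 3840, 3), dtype=np.uint8)  # scaled-up canvas
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)       # current video frame

# anchor at the centre of the background ...
top_left = np.array(background.shape[:2]) // 2 - np.array(frame.shape[:2]) // 2
# ... then shift it by the estimated camera translation (dummy values here)
shift_xy = np.array([12.0, -7.0])
top_left = (top_left[::-1] + shift_xy).round().astype(int)[::-1]

y0, x0 = top_left
background[y0:y0 + frame.shape[0], x0:x0 + frame.shape[1]] = frame
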
Source code in norfair/drawing/fixed_camera.py
def adjust_frame(
    self, frame: np.ndarray, coord_transformation: TranslationTransformation
) -> np.ndarray:
    """
    Render scaled up frame.

    Parameters
    ----------
    frame : np.ndarray
        The OpenCV frame.
    coord_transformation : TranslationTransformation
        The coordinate transformation as returned by the [`MotionEstimator`][norfair.camera_motion.MotionEstimator]

    Returns
    -------
    np.ndarray
        The new bigger frame with the original frame drawn on it.
    """

    # initialize background if necessary
    if self._background is None:
        original_size = (
            frame.shape[1],
            frame.shape[0],
        )  # OpenCV format is (width, height)

        scaled_size = tuple(
            (np.array(original_size) * np.array(self.scale)).round().astype(int)
        )
        self._background = np.zeros(
            [scaled_size[1], scaled_size[0], frame.shape[-1]],
            frame.dtype,
        )
    else:
        self._background = (self._background * self._attenuation_factor).astype(
            frame.dtype
        )

    # top_left is the anchor coordinate from which we start drawing the frame on top of the background
    # aim to draw it in the center of the background, but transformations will move this point
    top_left = (
        np.array(self._background.shape[:2]) // 2 - np.array(frame.shape[:2]) // 2
    )
    top_left = (
        coord_transformation.rel_to_abs(top_left[::-1]).round().astype(int)[::-1]
    )
    # box of the background that will be updated and the limits of it
    background_y0, background_y1 = (top_left[0], top_left[0] + frame.shape[0])
    background_x0, background_x1 = (top_left[1], top_left[1] + frame.shape[1])
    background_size_y, background_size_x = self._background.shape[:2]

    # define box of the frame that will be used
    # if the scale is not enough to support the movement, warn the user but keep drawing
    # cropping the frame so that the operation doesn't fail
    frame_y0, frame_y1, frame_x0, frame_x1 = (0, frame.shape[0], 0, frame.shape[1])
    if (
        background_y0 < 0
        or background_x0 < 0
        or background_y1 > background_size_y
        or background_x1 > background_size_x
    ):
        warn_once(
            "moving_camera_scale is not enough to cover the range of camera movement, frame will be cropped"
        )
        # crop left or top of the frame if necessary
        frame_y0 = max(-background_y0, 0)
        frame_x0 = max(-background_x0, 0)
        # crop right or bottom of the frame if necessary
        frame_y1 = max(
            min(background_size_y - background_y0, background_y1 - background_y0), 0
        )
        frame_x1 = max(
            min(background_size_x - background_x0, background_x1 - background_x0), 0
        )
        # handle cases where the limits of the background become negative which numpy will interpret incorrectly
        background_y0 = max(background_y0, 0)
        background_x0 = max(background_x0, 0)
        background_y1 = max(background_y1, 0)
        background_x1 = max(background_x1, 0)
    self._background[
        background_y0:background_y1, background_x0:background_x1, :
    ] = frame[frame_y0:frame_y1, frame_x0:frame_x1, :]
    return self._background

absolute_grid #

draw_absolute_grid(frame, coord_transformations, grid_size=20, radius=2, thickness=1, color=Color.black, polar=False) #

Draw a grid of points in absolute coordinates.

Useful for debugging camera motion.

The points are drawn as if the camera were in the center of a sphere and points are drawn in the intersection of latitude and longitude lines over the surface of the sphere.

Parameters:

Name Type Description Default
frame ndarray

The OpenCV frame to draw on.

required
coord_transformations CoordinatesTransformation

The coordinate transformation as returned by the MotionEstimator

required
grid_size int

How many points to draw.

20
radius int

Size of each point.

2
thickness int

Thickness of each point.

1
color ColorType

Color of the points.

black
polar bool

If True, the points on the first frame are drawn as if the camera were pointing to a pole (viewed from the center of the earth). By default, False is used, which means the points are drawn as if the camera were pointing to the Equator.

False
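
A minimal usage sketch, assuming the usual norfair video loop (only the motion estimate is needed, so no detector or tracker is required; the import paths follow the source file shown below and the MotionEstimator referenced above):

from norfair import Video
from norfair.camera_motion import MotionEstimator
from norfair.drawing.absolute_grid import draw_absolute_grid

motion_estimator = MotionEstimator()
video = Video(input_path="video.mp4")

for frame in video:
    coord_transformations = motion_estimator.update(frame)
    # the grid is fixed in absolute coordinates, so it should look static
    # on screen whenever the camera motion is estimated correctly
    draw_absolute_grid(frame, coord_transformations)
    video.write(frame)
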
Source code in norfair/drawing/absolute_grid.py
def draw_absolute_grid(
    frame: np.ndarray,
    coord_transformations: CoordinatesTransformation,
    grid_size: int = 20,
    radius: int = 2,
    thickness: int = 1,
    color: ColorType = Color.black,
    polar: bool = False,
):
    """
    Draw a grid of points in absolute coordinates.

    Useful for debugging camera motion.

    The points are drawn as if the camera were in the center of a sphere and points are drawn in the intersection
    of latitude and longitude lines over the surface of the sphere.

    Parameters
    ----------
    frame : np.ndarray
        The OpenCV frame to draw on.
    coord_transformations : CoordinatesTransformation
        The coordinate transformation as returned by the [`MotionEstimator`][norfair.camera_motion.MotionEstimator]
    grid_size : int, optional
        How many points to draw.
    radius : int, optional
        Size of each point.
    thickness : int, optional
        Thickness of each point.
    color : ColorType, optional
        Color of the points.
    polar : bool, optional
        If True, the points on the first frame are drawn as if the camera were pointing to a pole (viewed from the center of the earth).
        By default, False is used, which means the points are drawn as if the camera were pointing to the Equator.
    """
    h, w, _ = frame.shape

    # get absolute points grid
    points = _get_grid(grid_size, w, h, polar=polar)

    # transform the points to relative coordinates
    if coord_transformations is None:
        points_transformed = points
    else:
        points_transformed = coord_transformations.abs_to_rel(points)

    # filter points that are not visible
    visible_points = points_transformed[
        (points_transformed <= np.array([w, h])).all(axis=1)
        & (points_transformed >= 0).all(axis=1)
    ]
    for point in visible_points:
        Drawer.cross(
            frame, point.astype(int), radius=radius, thickness=thickness, color=color
        )