A “fast-moving object” (FOM), in the world of image detection, is one whose motion is too fast to be captured sharply in a single frame: it appears as a blurred “streak”.
This poses challenges when trying to detect the object’s speed and trajectory, as well as when estimating its future trajectory and point of impact.
Typical examples come from the world of sport; using the simplifying assumption that the object is a sphere (i.e., a ball, as is commonly the case), Rozumnyi et al., in “The World of Fast Moving Objects”, have devised an algorithm to detect FOMs in video streams.
Using OpenCV and some simple image manipulation, I have implemented the first of their three stages, the so-called “detector”, which provides an initial, fast, but approximate detection.
Detection of Motion
The first step uses three consecutive frames, computing their binary differences to isolate all moving objects in the frame:
```python
# Convert a video frame to a grayscale image.
cv2.imread(IMG_TEMPLATE.format(idx), cv2.IMREAD_GRAYSCALE)
...
# im_t is the frame of interest; im_tp1 and im_tm1 are, respectively,
# the successive and previous frames.
delta_plus = cv2.absdiff(im_t, im_tm1)
delta_0 = cv2.absdiff(im_tp1, im_tm1)
delta_minus = cv2.absdiff(im_t, im_tp1)
```
Finally, after “cleaning up” the binary images to remove noise, using OpenCV’s thresholding functions, one can derive a “detect” image and compute bounding boxes for all FOM candidates:
```python
detect = cv2.bitwise_not(
    cv2.bitwise_and(cv2.bitwise_and(dbp, dbm),
                    cv2.bitwise_not(db0)))
# The original `detect` image was suitable for display, but it is
# "inverted" and not suitable for component detection; we need to
# invert it first.
nd = cv2.bitwise_not(detect)
num, labels, stats, centroids = cv2.connectedComponentsWithStats(
    nd, ltype=cv2.CV_16U)
```
So far, we seem to be on to something:
The final step is, for each component, to compare the area that the detected path would imply under a given motion model (essentially, a sphere moving along a linear path) with the actual area captured by the binary difference:
```python
# We estimate the path length as the max possible length in
# the bounding box: its diagonal.
path_len = math.sqrt(w * w + h * h)
expected_area = radius * (2 * path_len + math.pi * radius)
area_ratio = abs(actual_area / expected_area - 1)
if area_ratio < gamma:
    fom_detected = True
```
gamma is a tuning threshold: the paper sets it at 0.2, but I have found 0.3 to work better in our case.
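As a sanity check on the formula, here is a worked example with made-up numbers (the radius, bounding box, and actual area are my assumptions, not values from the paper):

```python
import math

# Hypothetical values: a ball of radius 5 px whose streak spans a
# 40x30 bounding box.
radius = 5.0
w, h = 40, 30
gamma = 0.3

# Diagonal of the bounding box: sqrt(40^2 + 30^2) = 50.
path_len = math.sqrt(w * w + h * h)
# Area swept by a moving disc: a 2r-wide band plus the disc itself.
expected_area = radius * (2 * path_len + math.pi * radius)
# Suppose the connected component actually covered ~520 px.
actual_area = 520.0
area_ratio = abs(actual_area / expected_area - 1)
fom_detected = area_ratio < gamma
```

With these numbers the expected area is about 578.5 px, the ratio is about 0.10, and the candidate passes the test.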
Once a FOM is detected, its bounding rectangle gives its location, and further computations (e.g., trajectory estimation and impact detection) can be carried out across successive frames.
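As one illustration of such “further computations” (this is not part of the paper’s detector), a minimal constant-velocity extrapolation of the next centroid from two consecutive detections might look like:

```python
def extrapolate_next(p_prev, p_curr):
    """Linearly extrapolate the next centroid position.

    Assumes constant velocity between frames, a deliberate
    simplification rather than the paper's motion model.
    """
    vx = p_curr[0] - p_prev[0]
    vy = p_curr[1] - p_prev[1]
    return (p_curr[0] + vx, p_curr[1] + vy)

# Two hypothetical centroids from consecutive frames.
next_pos = extrapolate_next((100.0, 80.0), (112.0, 74.0))
```

Chaining this over several frames gives a rough linear trajectory, which could then feed an impact estimate.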
Full code example
The full Jupyter notebook, with all the code, is available on my GitHub repository.
Installing OpenCV on Ubuntu 17.10
It is not as straightforward as one might hope, so I have written a script to automate the process. YMMV.