Lucas-Kanade Optical Flow not able to track feature points across frames


I’ve found the feature points on the initial frame using the Shi-Tomasi detector (cv2.goodFeaturesToTrack()). The issue is that cv2.calcOpticalFlowPyrLK() is not able to consistently track those feature points across the frames.

Here are the feature points detected on the initial frame: Frame 1

And here’s the optical flow of the second frame: Frame 2

As you can see, the feature points on the vehicles are completely off. Here are Frame 3 and Frame 4 for further reference.

And no, it’s not an error in how my code plots the feature points. When I run the same code on the masks of the vehicles instead, it works fine: Frame 1, Frame 2, Frame 3, Frame 4. Some feature points change because the contour of the mask keeps changing, but apart from that, the optical flow of the feature points is fairly consistent across frames.

import cv2
import numpy as np

# bb_list, labels_list and scores_list (per-frame detections) come from
# the enclosing scope.
def optical_flow(img_list):
    # Parameters for Shi-Tomasi corner detection
    feature_params = dict(maxCorners=200, qualityLevel=0.4, minDistance=7, blockSize=7)

    # Parameters for Lucas Kanade optical flow
    lk_params = dict(
        winSize=(15, 15),
        maxLevel=2,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 0.03),
    )

    # Create random colors
    color = np.random.randint(0, 255, (300, 3))

    # Take first frame and find corners in it
    old_frame = img_list[0]
    old_gray = cv2.cvtColor(old_frame, cv2.COLOR_BGR2GRAY)

    box = bb_list[0][0]
    p1x = int(box[0])
    p1y = int(box[1])
    p2x = int(box[2])
    p2y = int(box[3])

    roi = old_gray[p1y:p2y, p1x:p2x]
    roi = cv2.resize(roi, None, fx=2, fy=2, interpolation=cv2.INTER_LINEAR)

    p0 = cv2.goodFeaturesToTrack(roi, mask=None, **feature_params)
    

    for point in p0:
        x, y = point.ravel()
        # cv2.circle needs integer pixel coordinates
        cv2.circle(roi, (int(x), int(y)), 4, (0, 0, 255), -1)
    
    cv2.imshow("frame", roi)
    cv2.waitKey(0)


    print(p0)
    p0 = scale_p0(p0)        # map points back from the 2x-resized ROI
    print(p0)
    p0 = bb_offset(p0, box)  # translate ROI coordinates into the full frame
    
    old_vis = old_frame.copy()
    for point in p0:
        x, y = point.ravel()
        cv2.circle(old_vis, (int(x), int(y)), 4, (0, 0, 255), -1)
    
    cv2.imshow("frame", old_vis)
    cv2.waitKey(0)

    
    # Create a mask image for drawing purposes
    mask = np.zeros_like(old_frame)

    optical_flow_orientation_list = []
    j = 0
    for frame, boxes, labels, scores in zip(img_list[1:], bb_list[1:], labels_list[1:], scores_list[1:]):
        # Read new frame
        frame_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        #visualize_bb(boxes, labels, scores, frame)

        # Calculate Optical Flow
        p1, st, err = cv2.calcOpticalFlowPyrLK(
            old_gray, frame_gray, p0, None, **lk_params
        )
        # Select good points
        good_new = p1[st == 1]
        good_old = p0[st == 1]

        # Draw the tracks
        for i, (new, old) in enumerate(zip(good_new, good_old)):
            a, b = new.ravel()
            c, d = old.ravel()
            optical_flow_orientation_list.append(calculate_perspective(a, b, c, d))
        
        # Find the outlier optical flow vectors (note: the orientation list
        # accumulates across frames, so the median covers all frames so far)
        is_outlier_list = is_outlier(np.array(optical_flow_orientation_list))
        
        for i, (is_true, new, old) in enumerate(zip(is_outlier_list, good_new, good_old)):
            a, b = new.ravel()
            c, d = old.ravel()
            if not is_true:
                # cv2.line / cv2.circle need integer pixel coordinates
                mask = cv2.line(mask, (int(a), int(b)), (int(c), int(d)), color[i].tolist(), 2)
                frame = cv2.circle(frame, (int(a), int(b)), 5, color[i].tolist(), -1)

        # Display the demo
        img = cv2.add(frame, mask)
        cv2.imshow("frame", img)
        #k = cv2.waitKey(25) & 0xFF
        k = cv2.waitKey(0)
        if k == 27:
            break
        
        j += 1
        # Update the previous frame and previous points
        old_gray = frame_gray.copy()
        p0 = good_new.reshape(-1, 1, 2)
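The listing above calls two helpers, scale_p0() and bb_offset(), that aren’t shown. Roughly, scale_p0 undoes the 2x ROI resize and bb_offset translates the points into full-frame coordinates; a minimal sketch (the exact bodies may differ from what I run):

```python
import numpy as np

def scale_p0(p0, scale=2.0):
    # Map points detected on the 2x-upscaled ROI back to the
    # original ROI resolution.
    return (p0 / scale).astype(np.float32)

def bb_offset(p0, box):
    # Translate ROI-relative points into full-frame coordinates by
    # adding the bounding box's top-left corner (box[0], box[1]).
    out = p0.copy()
    out[:, 0, 0] += int(box[0])
    out[:, 0, 1] += int(box[1])
    return out
```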

Here’s the optical flow function that I’ve written. I find the feature points inside the bounding-box region and then compute the optical flow. I only keep the optical flow vectors whose orientation is within a certain threshold of the median orientation of all the vectors.
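The orientation filter boils down to: compute each vector’s angle, take the median over the collected angles, and flag vectors that deviate by more than a threshold. A sketch of the idea (my actual calculate_perspective() / is_outlier() may use a different threshold, and this naive version ignores angle wrap-around at ±180°):

```python
import numpy as np

def calculate_perspective(a, b, c, d):
    # Orientation (degrees) of the flow vector from the old
    # point (c, d) to the new point (a, b).
    return float(np.degrees(np.arctan2(b - d, a - c)))

def is_outlier(angles, thresh=30.0):
    # True for vectors whose orientation deviates from the
    # median by more than `thresh` degrees.
    med = np.median(angles)
    return np.abs(angles - med) > thresh
```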