Question Details

No question body available.

Tags

python opencv face-detection mediapipe

Answers (1)

Accepted Answer
March 7, 2026 · Score: 5

You have to use the detector you created.

Just because the running mode is live stream doesn't mean the detector object knows *which* live stream. You've directed data out of it with the callback, but you haven't directed any data into it. Consider why you need to create the cap object: it refers to one specific live stream. You could even switch streams, or feed a mix of several streams into one detector, if you wanted to.

It looks like you used code from one section of Google's guide. The subsequent sections show how to feed frames into the detector:

# frame must be RGB; if it came from OpenCV's cap.read(), convert it first
# with cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
mp_image = mp.Image(image_format=mp.ImageFormat.SRGB, data=frame)
detector.detect_async(mp_image, timestamp_ms)

Notice that the detector works asynchronously here. That's standard practice, because detection can be slow and would otherwise block the program (or a website) from doing anything else in the meantime, e.g. responding to user input. It's also why you give it a callback: once the detector has processed a frame, the callback does whatever is needed with the result. You can put imshow and anything related, such as drawing a detected bounding box, into the callback as well.
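To make the data flow concrete, here's a minimal, self-contained sketch of the callback pattern. `FakeDetector` is a hypothetical stand-in for MediaPipe's detector (not the real API), just to show that `detect_async` hands work off to another thread and the result comes back through your callback:

```python
import queue
import threading

results = queue.Queue()

def on_result(result, timestamp_ms):
    # In the real callback you'd draw bounding boxes and call cv2.imshow here.
    results.put((timestamp_ms, result))

class FakeDetector:
    """Hypothetical stand-in for the MediaPipe detector, to show the flow."""
    def __init__(self, result_callback):
        self._callback = result_callback

    def detect_async(self, image, timestamp_ms):
        # The real detector runs on a worker thread and invokes the callback
        # when it finishes; a plain thread simulates that here.
        threading.Thread(
            target=self._callback, args=(f"detections for {image}", timestamp_ms)
        ).start()

detector = FakeDetector(on_result)
detector.detect_async("frame_0", 0)   # returns immediately
ts, result = results.get()            # blocks until the callback fires
```

The main loop stays responsive because `detect_async` returns right away; anything that must happen per-result lives in the callback.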

The variable timestamp_ms is necessary because of this async approach: it gives an ordering to the outputs that are fed into the callback function. Just set it to 0 before your loop and increase it inside the loop by however many milliseconds you wait per frame.
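The bookkeeping is just a running counter. A minimal sketch, assuming roughly 30 FPS (so about 33 ms between frames, e.g. via `cv2.waitKey(33)`); the loop body stands in for your real `cap.read()` / `detect_async` calls:

```python
wait_ms = 33       # assumed per-frame delay; match whatever you pass to cv2.waitKey
timestamp_ms = 0   # initialize once, before the loop
sent = []

for _ in range(4):            # stands in for: while cap.isOpened(): ...
    # real code: detector.detect_async(mp_image, timestamp_ms)
    sent.append(timestamp_ms)
    timestamp_ms += wait_ms   # advance by however long you wait each iteration
```

The exact increment matters less than monotonicity: MediaPipe only requires that each frame's timestamp be strictly larger than the previous one.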