# Quadrotor and OpenCV in Python

I bought a new toy for myself the other week: a Hubsan X4 H107C, a really cheap, small quadrotor with a 0.3MP video camera on the nose.  I’m considering building a bigger quadrotor of my own, so I bought this little guy to see whether I like them enough before investing in something bigger (and learning to fly on this one is much cheaper than on a big quad).

I’m also trying out OpenCV-Python, a computer vision package, and these two items seemed like a great combination.  I flew a bit yesterday down by the bay and got a funny clip of the quad getting over the treeline, being blasted by a gust of wind, and crashing about 300 feet away from me (all the action is around 1:23, near the end of the video).

The main exercise was to read a video into Python with OpenCV, manipulate it frame by frame, and then get a video output.  I learned a decent amount about video files along the way: .avi is a really outdated container, codecs are not containers, and ffmpeg is a much better way to stitch images into a video than OpenCV, which is better at analyzing and manipulating standalone images.  The Python code at the bottom creates the three videos, each showing a different (commented) tool in OpenCV: histogram equalization, Harris corner detection, and Shi-Tomasi corner detection.  Note the ffmpeg line at the bottom of the Python code: after hours of trying to stitch the images into a video with OpenCV, I realized it’s much easier and more efficient to just save each frame out as a .png and then use ffmpeg to stitch those into the output video.  OpenCV only writes .avi containers, while ffmpeg handles most file types and about any codec one can imagine.  OpenCV does some pretty neat stuff, including object and facial recognition and tracking, and it does run on Python, though I’m not sure how intensive some of its algorithms are.

#############################
## This code processes an .avi video container image by image, and saves the
## resultant images in the working directory.  The goal is to re-combine the
## processed images using software better suited to the job (not openCV),
## such as ffmpeg.
##
##    cap := video input
##    Image_"i" := list of images for output (will get large)
##
## Precondition:   need an .avi video in the working directory
## Postcondition:  a series of Image_xxxx.png's in the same working directory
##                with some filter applied (code to make video at bottom)
##
## Much of this code has been sourced and augmented from openCV tutorials:
## please cite 'diffusecreation.com' and the above if you choose to use
## or modify this code.
##
## Lewis Guignard
## [email protected]
###############################
import numpy as np
import cv2

filenameNoIndex = 'Image_'  #output image file prefix
i=0
video = raw_input('Please enter the video filename, you sexy beast: ')
cap = cv2.VideoCapture(video)

while(cap.isOpened()):
    ret, frame = cap.read()  # ret is False once the video runs out of frames
    if ret == True:
        ####################################
        i += 1
        if i == 2120: # cut off after some frames
            break
        # real-time feedback on progress, set up filename
        if i % 10 == 0:
            print 'i = ' + str(i)
        # make string of filename (zero-pad the frame index to 4 digits)
        index = str(i)
        while(len(index) != 4):
            index = '0' + index
        filenameIndex = filenameNoIndex + index + '.png'
        ####################################
        grayframe = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        ## uncomment one of the three filters below:
        #########################
        ### Harris corner detector
##        grayframe = np.float32(grayframe) # need float32 datatype for this function
##        dst = cv2.cornerHarris(grayframe, 2, 5, 0.04)
##        # result is dilated for marking the corners, not important
##        dst = cv2.dilate(dst, None)
##
##        # Threshold for an optimal value, it may vary depending on the image.
##        frame[dst > 0.05*dst.max()] = [0, 0, 255]
        #####################

        ########################
        ### Shi-Tomasi corner detector
        corners = cv2.goodFeaturesToTrack(grayframe, 50, 0.01, 10)
        corners = np.int0(corners)
        for j in corners:
            x, y = j.ravel()
            cv2.circle(frame, (x, y), 3, (0, 0, 255), -1)
        #########################

        #####################
        ### Histogram equalization (convert equ back to BGR so the hstack
        ### dimensions match the 3-channel frame)
##        equ = cv2.equalizeHist(grayframe)
##        res = np.hstack((frame, cv2.cvtColor(equ, cv2.COLOR_GRAY2BGR))) # side-by-side stack
##        frame = equ
        #####################

        # write the frame
        cv2.imwrite(filenameIndex, frame)
        # display the frame
        cv2.imshow('frame', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break

# Release everything if job is finished
cap.release()
cv2.destroyAllWindows()

############# upon completion (maybe later put this in a script and call it):
#run this in the terminal to make a .mp4 of the output images, and then delete images:
#ffmpeg -framerate 20 -i Image_%04d.png -c:v libx264 -r 30 -pix_fmt yuv420p out.mp4
#rm Ima*
#############
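Since the comment above suggests putting those terminal lines in a script, here is one way that could look. This is my own sketch, not from the original post: the helper names `ffmpeg_cmd` and `stitch_and_clean` are made up, the flags are just the one-liner above, and it assumes ffmpeg is on the PATH.

```python
import glob
import os
import subprocess

def ffmpeg_cmd(prefix='Image_', out='out.mp4', framerate=20):
    # Same flags as the one-liner above: read Image_0001.png, Image_0002.png, ...
    # at 20 fps input, encode with libx264 at 30 fps, yuv420p for broad player support.
    return ['ffmpeg', '-framerate', str(framerate), '-i', prefix + '%04d.png',
            '-c:v', 'libx264', '-r', '30', '-pix_fmt', 'yuv420p', out]

def stitch_and_clean(prefix='Image_'):
    # Stitch the frames, then do the 'rm Ima*' step, scoped to the exact prefix
    # so nothing else in the directory gets deleted.
    subprocess.check_call(ffmpeg_cmd(prefix))
    for png in glob.glob(prefix + '*.png'):
        os.remove(png)
```

Building the command as a list (rather than one shell string) sidesteps shell quoting of the `%04d` pattern.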

