Fox Row Rowing, computers, astronomy


Online QR code reader

Choose an image to read a QR code:

QR-decoding code courtesy of LazarSoft, available under the Apache License.

Detecting lines in an image with Hough transforms

Part of a series of examples from the CV cheat sheet. Click here for other computer vision applications.

Detecting straight lines in an image is a common task in computer vision. One way to do it is with the Hough (pronounced “Huff”) transform. The math behind it is beyond the scope of this example, but we should be able to get you off the ground using it with OpenCV and Python. We’ll assume you already have OpenCV installed and can use it from Python.

Take an input image, say, this basketball court:

Credit: Wikimedia Commons

Let’s detect the straight lines in this image. The gist is this:

  1. Open an image
  2. Detect edges in it
  3. Use those edges to detect straight lines
  4. Draw those lines back on the image (this step is optional, but it makes a nice visual)
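For intuition, the voting scheme at the heart of the Hough transform can be sketched in a few lines of NumPy. This is a toy illustration of the idea only - the function name and binning are my own, and OpenCV's implementation is far more optimized:

```python
import numpy as np

def hough_votes(edge_points, theta_bins=180):
    """Toy Hough accumulator: every edge point votes for each
    (rho, theta) line that could pass through it.  Peaks in the
    accumulator correspond to lines supported by many points."""
    thetas = np.linspace(0, np.pi, theta_bins, endpoint=False)
    # offset so negative rho values still index into the array
    max_rho = int(np.hypot(*np.max(edge_points, axis=0))) + 1
    acc = np.zeros((2 * max_rho, theta_bins), dtype=int)
    for x, y in edge_points:
        # rho = x*cos(theta) + y*sin(theta), one vote per theta bin
        rhos = (x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + max_rho, np.arange(theta_bins)] += 1
    return acc, max_rho

# points along the vertical line x = 5 all vote for (rho=5, theta=0)
acc, offset = hough_votes([(5, y) for y in range(10)])
```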

Here’s the code to extract lines in Python:

# import libraries we'll need
import cv2
import numpy as np

# read in your image
img = cv2.imread('court.jpg')

# convert to grayscale so we can detect edges
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# use Canny edge detector to find edges in the image.  The thresholds determine how
# weak or strong an edge will be detected.  These can be tweaked.
lower_threshold = 50
upper_threshold = 150
edges = cv2.Canny(gray, lower_threshold, upper_threshold)

# detect lines in the image.  This is where the real work is done.  Higher threshold
# means a line needs to be stronger to be detected, so again, this can be tweaked.
threshold = 250
lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold)

# convert each line to coordinates back in the original image.
# HoughLines returns None when no lines clear the threshold, so guard first.
if lines is not None:
    for line in lines:
        for rho, theta in line:
            a = np.cos(theta)
            b = np.sin(theta)
            x0 = a * rho
            y0 = b * rho
            # extend the line 1000 pixels in each direction for drawing
            x1 = int(x0 + 1000 * -b)
            y1 = int(y0 + 1000 * a)
            x2 = int(x0 - 1000 * -b)
            y2 = int(y0 - 1000 * a)

            # draw each line on the image in red (BGR color order)
            cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 1)

# write the image to disk
cv2.imwrite('houghlines.jpg', img)

Here’s the result:

[Image: Hough lines drawn on the basketball court]

We can see it found the majority of lines on the court - and quite a few in the skyline. This is why you’ll want to tune the threshold in cv2.HoughLines(edges, 1, np.pi / 180, threshold) based on your needs to prevent false positives or false negatives.

Computer vision cheat sheet

Computer vision is a broad field - there are many algorithms, and it can be hard to tell the best way to attack a given problem. This page aims to be a guide to what to use.


What this isn’t

  • The bleeding edge. CV moves very quickly. I haven’t always heard about the newest bestest way to analyze $YOUR_FAVORITE_THING. Email me and I’ll take it under consideration though!
  • The best way. “Best” often isn’t even well-defined. Preprocessing time, CPU usage, memory usage, accuracy, and simplicity are all axes you might value differently for different applications. All else equal, I will try and show the simplest way to implement a solution.
  • Comprehensive. There are many, many areas in CV. Covering them all is beyond the scope of this guide.

What this is

  • An illustration of what is possible. It isn’t always obvious a task can be done, let alone knowing what magic words to Google. I want to provide an overview of what is out there.
  • Translation. Often you know what you want, but have no way of knowing that it’s called, say, histogram backprojection. This should help bridge that gap.
  • A living document. I’ll be adding entries, updating, and adding examples. If you have suggestions for more, let me know!


I want to...                                   | You should use...
Detect lines in an image                       | Hough transform
Detect circles in an image                     | Circular Hough transform
Find images similar to the one I have          | pHash, dHash
Find images containing a particular object     | Feature detector - ORB, SIFT, SURF, KAZE/AKAZE
Find images containing any particular object   | Haar cascade, convolutional neural net
Track an object in a video                     | Meanshift, Camshift
Detect faces in an image                       | Haar cascade
Detect people in an image                      | Histogram of Oriented Gradients (HOG) people detector
Remove defects (like passers-by) from an image | Median filter
Detect motion in an image                      | Background subtraction + frame differencing
Read words or numbers in an image              | Optical character recognition (OCR)

How to generate Django secret keys

You accidentally committed your secret key to version control, or the server got compromised, or you lost control of the secret key for some reason. Using an online service to generate one is a bad idea - nobody else should ever have your secret key. How do you generate a new secret key? The otherwise-excellent Django docs are silent on this. Here’s how Django generates one when you run startproject:

>>> import random
>>> ''.join(random.SystemRandom().choice('abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)') for i in range(50))

Generate a new one and you’re good to go.
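If you’re on Python 3.6 or newer, the standard-library secrets module gives you a cryptographically strong source without reaching for SystemRandom directly. A minimal sketch using the same character set (the helper name is my own, not Django’s):

```python
import secrets

# same character set Django's startproject uses
CHARS = 'abcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*(-_=+)'

def make_secret_key(length=50):
    """Build a Django-style secret key from a secure random source."""
    return ''.join(secrets.choice(CHARS) for _ in range(length))

print(make_secret_key())
```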

retread.py - detect duplicate frames in Video, TV, or Film

The Force Awakens has a lot of shots that are callbacks to A New Hope - they evoke very particular scenes and shots, if not outright identical. That got me thinking about recycling shots in films. How could that be quantified?

A common task in image processing is finding sets of similar images. Often you have a picture, and you want to search for identical or near-identical copies from some large set. A few ways to do this are described here - pHash and dHash in particular are quite effective.
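To make the idea concrete, here’s a minimal dHash sketch in NumPy. It assumes the image has already been converted to grayscale and resized to hash_size × (hash_size + 1) pixels; the function name is my own:

```python
import numpy as np

def dhash(image, hash_size=8):
    """Difference hash: set a bit wherever a pixel is brighter than
    its left neighbor, then pack the bits into one integer.  Similar
    images produce hashes with a small Hamming distance."""
    assert image.shape == (hash_size, hash_size + 1)
    diff = image[:, 1:] > image[:, :-1]
    return sum(1 << i for i, bit in enumerate(diff.flatten()) if bit)
```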

We can use this to our advantage. Given a video clip, we should be able to go frame-by-frame and hash each image. For each frame, we can then count how many earlier frames hashed to the same value - a measure of how much that frame repeats earlier material. I wrote up a Python tool to do this: retread.py.
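The counting step itself is simple; here’s a sketch of the idea using collections.Counter (an illustration only, not retread’s actual code):

```python
from collections import Counter

def duplicate_counts(frame_hashes):
    """For each frame, return how many earlier frames had the same hash."""
    seen = Counter()
    counts = []
    for h in frame_hashes:
        counts.append(seen[h])  # occurrences so far, before this frame
        seen[h] += 1
    return counts
```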

Retread measures how much any given frame is reused in a video clip. It can be a TV show, an online video, or any video file you can get your hands on. It spits out JSON, so you can plot the results nicely with d3.

Now, let’s see what some movies look like. First up is Mad Max: Fury Road, a 2015 action blockbuster with many fast cuts and short shots overall.

Often the most common frame ends up being black. This is usually credits, maybe a handful of frames from the beginning of the movie, and any fade-to-blacks in the middle. This shows up as big, solid bars near the beginning and end. The interesting stuff is in the middle. Here we see Max doesn’t have any monster repetitive sections:

[Chart: duplicate frames in Mad Max: Fury Road]

On the other hand, Memento, a film with a nonlinear story, has more and heavier spikes. The film is presented in sort of an outside-in fashion - the end comes first, then the beginning, then back to the end, working toward the middle the whole time:

[Chart: duplicate frames in Memento]

Inception somewhat famously wraps multiple stories within one another, with the narrative jumping between them in parallel. Christopher Nolan directed both Memento and Inception, and he seems to be a fan of cutting across time and space. This results in a moderately busy graph:

[Chart: duplicate frames in Inception]

Paprika, one of the inspirations for Inception, has even more. Flashbacks occur throughout the film, and it shows. The chart is almost totally full:

[Chart: duplicate frames in Paprika]

See retread here to analyze your own films, TV shows, or clips.
