Revisiting Processing

Over the last month or so, I’ve been busy experimenting with Processing, an open source programming language geared towards artists and creatives who want to make visual and interactive work. I first looked into Processing during my MA in 2008 and, being far less experienced with programming at the time, I found it a bit tricky to get my head around, so I didn’t pursue it any further. This time round, after some experimentation and coming fresh out of the Reasons to be Creative festival with tons of inspiration, I had a real urge to revisit the Processing language.

Having spent a fair bit of time using the Arduino platform, I found Processing a lot easier to get into this time, as the two have very similar development environments. One of my main motivations for getting into Processing was its ease of integration with Arduino. Processing can communicate with an Arduino board via the serial port and, similarly, the feedback provided by sensors or buttons wired to the Arduino board can be interpreted within Processing to produce visual or programmatic responses.
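
As a quick taster, here’s roughly what the Processing side of that serial link can look like. This is just an illustrative sketch, assuming the Arduino is writing single bytes at 9600 baud and shows up as the first port in Serial.list() – adjust the port index for your own machine:

import processing.serial.*;

Serial arduino;

void setup() {
  size(200, 200);
  // Print the available serial ports, then open the first one –
  // an assumption; pick whichever port your Arduino is actually on
  println(Serial.list());
  arduino = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  // Read each incoming byte (0-255) and use it as a grey level
  if (arduino.available() > 0) {
    int sensorValue = arduino.read();
    background(sensorValue);
  }
}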

One of the things that immediately struck me about Processing is that it bears quite a few similarities to Flash in terms of its core uses and the kinds of libraries that have been written for it. What I find most exciting, though, is its potential to interface with a range of different hardware devices such as the Kinect/ASUS Xtion, webcams and Arduino, and undoubtedly plenty more that I haven’t considered. In addition, there are some really great libraries to help with tasks like manipulating sound and video, handling MIDI/Open Sound Control, and using web-based APIs.
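
To give a flavour of the libraries side, here’s roughly what receiving an Open Sound Control message looks like with Andreas Schlegel’s oscP5 library – again just a sketch, assuming the library is installed and something is sending to port 12000 (an arbitrary choice):

import oscP5.*;

OscP5 osc;

void setup() {
  size(200, 200);
  // Listen for incoming OSC messages on port 12000
  osc = new OscP5(this, 12000);
}

// oscP5 calls this automatically whenever a message arrives
void oscEvent(OscMessage msg) {
  println("Received an OSC message with address pattern: " + msg.addrPattern());
}

void draw() {
}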

It’s fast becoming one of my favourite platforms for prototyping and experimenting with a variety of tools and libraries.

Often in coding, I find that one of the best ways to learn is to come up with a small project which isn’t too elaborate and solves a problem or two. This usually gets me thinking about how various components can be pieced together into a solution, and it often gives a good idea of what can be done with a new piece of technology.

So, here’s the problem: a neighbour of ours recently had a break-in, during which a few electrical items were stolen. The impression I get from the local police/neighbourhood watch e-mail updates is that break-ins are often preceded by a quick check through the windows of a house (or several) a few days beforehand, to decide which houses are worth breaking into based on what can be easily and quickly stolen. My aim, then, was to come up with a really cheap and effective way of creating a basic home security app which could potentially catch criminals in the act of ‘window shopping’, and possibly help pre-empt break-ins.

My main aim with this app was to create something really simple, effective and inexpensive – something which would make use of resources that most people with a computer might already have.

Existing home security kits feature cameras, sometimes infrared ones, which can be set to record when motion is detected; some can even text or e-mail you when activity is detected. These are OK, but they’re also really expensive.

In this instance, my idea was that an app could be left running during the day, with a cheap webcam connected and positioned to look through a window (possibly using a USB extension lead) to keep an eye on any suspicious behaviour outside the house. The app could then use basic motion detection to decide when still photos should be taken to keep track of what is happening.

The final element I thought would be useful is some sort of notification system to alert the user when any activity is detected. It would be easy enough to build in some sort of tweet functionality but, depending on how it’s done, this may require the user to register an app with Twitter and enter OAuth details or passwords, which, if my target is a broader audience, would probably make the app less accessible.

With this in mind, I thought a really simple and effective way of solving this problem would be to deploy the app in a Dropbox account – it’s perfect for the job as it provides both storage space and a form of notification system. I have my Dropbox account linked with both my work and home computers, which allows me to share code, libraries, reference materials, etc. between home and work – the other key thing it’s great at is notifying you when a file has been added or modified.
By storing and running the app in its own folder on Dropbox, any photo that is taken will be saved within the app folder. When the file is created, Dropbox’s notification system will alert the user to it with a little pop-up message – perfect!
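
If you’d rather not keep the whole sketch inside Dropbox, an alternative is to point save() at your Dropbox folder directly, reusing the same timestamp variable built in the captureImage() function below. The path here is hypothetical – substitute the location of your own Dropbox folder:

// Hypothetical absolute path – replace with your own Dropbox location
String dropboxFolder = "/Users/yourname/Dropbox/MotionDetection/";
save(dropboxFolder + "image_" + timestamp + ".jpg");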

So, here’s the rough app that I created – feel free to steal, try out, re-work, etc. Either download the package for your preferred platform below (Windows or Mac) and run the MotionDetection app file on your machine, or copy and paste the code into a fresh Processing sketch, making sure to import the ControlP5 library to take care of the user interface.

Windows

Mac OSX

Here’s my source:


// This example incorporates the excellent Control P5 library which you can find here: http://www.sojamo.de/libraries/controlP5/

// Basic webcam motion capture security thingy //

// Based on Daniel Shiffman's example: http://www.learningprocessing.com/examples/chapter-16/example-16-13/

/* Imports */

// ControlP5
import controlP5.*;

// Video
import processing.video.*;

/* Vars */

// Capture devices
Capture video;
Capture cam;

/* Control P5 instance and initial slider settings */
ControlP5 cp5;

// Previous Frame
PImage prevFrame;

// How different must a pixel be to be a "motion" pixel
float threshold = 300;

// Timer variables
int savedTime;
float intervalTime = 5; // Initial interval in seconds; draw() converts the slider value to milliseconds

/* Functions */

// Setup
void setup() {

  // Set the app dimensions:
  size(320, 520);

  // Draw the comparison video
  video = new Capture(this, 320, 240, 30);

  // Draw the webcam/normal video view
  cam = new Capture(this, 320, 240, 30);

  // Create an empty image the same size as the video
  prevFrame = createImage(320, 240, RGB);

  // Control panel/interface stuff
  cp5 = new ControlP5(this);

  cp5.addSlider("thresholdSlider")
     .setWidth(300)
     .setPosition(10, 180)
     .setRange(150, 450)
     .setValue(threshold);

  cp5.addSlider("intervalSlider")
     .setWidth(300)
     .setPosition(10, 220)
     .setRange(2, 20)
     .setValue(intervalTime);

  // Start the video sources
  video.start();
  cam.start();

}

// Draw
void draw() {

  // Check video source availability
  if (video.available()) {

    // Before we read the new frame, we always save the previous frame for comparison!
    prevFrame.copy(video, 0, 0, 320, 240, 0, 0, 320, 240);
    prevFrame.updatePixels();

    // Read the new video data
    video.read();

  }

  // Check webcam source availability
  if (cam.available()) {
    // Read the webcam data
    cam.read();
    // Display the 'webcam' view so that the capture function can record a real
    // image of the motion taking place, not a difference map
    image(cam, 0, 240);
  }

  // During each loop, refresh the current display's pixel data for comparison with the previous frame
  loadPixels();
  video.loadPixels();
  prevFrame.loadPixels();

  // Begin loop to walk through every pixel
  for (int x = 0; x < 320; x++) {
    for (int y = 0; y < 240; y++) {

      int loc = x + y*320;                    // Step 1, what is the 1D pixel location
      color current = video.pixels[loc];      // Step 2, what is the current color
      color previous = prevFrame.pixels[loc]; // Step 3, what is the previous color

      // Step 4, compare colors (previous vs. current)
      float r1 = red(current); float g1 = green(current); float b1 = blue(current);
      float r2 = red(previous); float g2 = green(previous); float b2 = blue(previous);
      float diff = dist(r1, g1, b1, r2, g2, b2);

      // Step 5, how different are the colors?
      // If the color at that pixel has changed, then there is motion at that pixel.
      if (diff > threshold) {
        // If motion, display black
        pixels[loc] = color(0);
        // Trigger 'captureImage' to store a still; its internal timer means it's
        // safe to call once per motion pixel without flooding the disk with shots
        captureImage();
      } else {
        // If not, display white
        pixels[loc] = color(255);
      }
    }
  }

  updatePixels();

  // Update slider values:
  fill(#222222);
  text("Threshold: ", 10, 160, 200, 50);
  threshold = cp5.getController("thresholdSlider").getValue();

  fill(#222222);
  text("Interval between shots: ", 10, 200, 200, 50);
  intervalTime = cp5.getController("intervalSlider").getValue()*1000; // Convert seconds to milliseconds

  // Create a solid panel below the webcam image and display the date and time
  // so that the details are stored on the saved image
  noStroke();
  fill(#222222);
  rect(0, 480, 320, 40);

  fill(#FFFFFF);
  text("Date/Time: ", 10, 490, 100, 50);
  text(day()+"/"+month()+"/"+year()+" at "+hour()+":"+minute()+":"+second(), 100, 490, 200, 50);

}

void captureImage() {

  // Calculate how much time has passed
  int passedTime = millis() - savedTime;

  // Check passedTime against intervalTime to see if the interval has passed
  if (passedTime > intervalTime) {

    // If yes - take a picture

    // Create a timestamp to mark the date and time of the image for later reference
    String timestamp = day()+"-"+month()+"-"+year()+"-"+hour()+"-"+minute()+"-"+second();

    // Apply the timestamp to the file name of the saved image to create a unique file name
    save("image_"+timestamp+".jpg");

    // Save the current time to restart the timer!
    savedTime = millis();

  } else {

    // If no - ignore request

  }

}

In developing this, I made use of Daniel Shiffman’s basic motion capture example, with the addition of a plain webcam view so that the app can store meaningful images of anything happening within the webcam’s shot. I’ve also added a few controls to allow adjustments to the threshold, i.e. the sensitivity of the motion detection, and to the interval between shots. Each of these values can be adjusted using the sliders. You should be able to see the effect of the adjustments on the light area behind the controls as a rough guide to the sensitivity – if you drop the threshold right down and wave at the camera, you should see the motion drawn as black areas. I find that a value of around 300-400 is about right for normal daylight; this might need to be lowered in darker environments. The shot interval can be set anywhere between 2 and 20 seconds. I found that with a lower threshold the app kept taking photos in quick succession; one way of tackling this is to raise the threshold, and another handy adjustment is to lengthen the interval so that fewer shots are taken in succession. Playing around a bit with these values should hopefully let most people find a happy medium.

Obviously this is a fairly crude example of what is possible and there are plenty of areas for improvement, but I think it represents a basic concept which can be built on. If this app doesn’t quite do it for you, maybe something like this would be a great alternative? : )
