The chameleon box

Late last year, we had an in-house brief at 9 Yards to come up with ideas for items, objects or installations to be displayed in the alcoves of the feature wall in our office.

My response to the brief was to turn one of the alcoves into a colour-sensing light box – place an object within the box, and the lighting in the box would change to match the colour of that object. I worked with various computer vision techniques last year and thought this would be a good challenge: a chance to combine and experiment with some of the techniques I’ve picked up, including background subtraction and colour averaging.

My other aim with the project was to experiment a bit more with the Raspberry Pi. I’ve had one for a while now and have found that you can do some really cool stuff with it, especially when combined with other hardware like the camera module and an Arduino. I also wanted a good excuse to dabble with the Python programming language as I hadn’t had much exposure to it until getting a Raspberry Pi.

There are a few key elements that make up the functionality of the box: the image capture, made possible using the Raspberry Pi camera module; the background subtraction and colour averaging processes, handled by the Python cv2 (OpenCV) library; and the lighting, which takes the form of a NeoPixel addressable RGB LED ring driven by an Arduino.

First off, the camera module. The Raspberry Pi camera module is a really nice addition to the device, enabling stills or video to be captured. There’s even a Pi NoIR camera module which allows infrared light to be captured, making things like night vision possible when combined with an infrared light source. The module is really easy to install, and there’s a Python library that you can use to operate its different features.

For the purposes of the colour averaging functionality, I needed to capture a series of photos. The first is a reference photo, taken of the empty box illuminated by a neutral white light. This image is stored and combined with the captured image, which features the object within the box, to create a difference mask, which is then used to strip the background out of the captured image. Next comes the colour averaging process: the captured image, minus the background, is processed using the cv2 library to determine the average colour of the masked pixels.

I also added the ability for the reference image to be recaptured, to compensate for changes in ambient light conditions. The opening of the box is relatively close to a window, and over the course of the day the brightness of the light changes as the sun passes over the office. Additionally, as the sunlight fades, the temperature of the lighting shifts from a blue-tinted outdoor light to the more yellow-tinted incandescent light of the office. See images of the different stages of the colour averaging process below.

Capture Process

Image – from top left to bottom right: reference image, captured scene, difference mask, background subtraction.

After establishing the average colour of the scene, it’s time to send a hex value to the light source in order to display the matched colour.

So, the addressable RGB LED ring. I played with addressable RGB LEDs when trying out the Adalight project a couple of years ago – it’s a really fun project and I had it running for a while, but took it apart when we changed the configuration of our TV setup. I definitely want to get it going again – possibly another project for the Raspberry Pi at some point.

In this instance, the LEDs came in the form of the SparkFun NeoPixel ring – I went for the 60-pixel ring for maximum effect. They come in quarter circles, so you have to solder them together; they’re bright, and the accompanying library is really easy to use. Unfortunately, running NeoPixels directly from a Raspberry Pi, whilst possible, isn’t entirely straightforward, so I decided to use an Arduino as the controller for the LEDs, which meant sussing out serial communication between the Pi and Arduino via USB. Again, this is something I’d played around with in the past but never really got the hang of until now – and when I did, it made complete sense.

The nature of serial communication is that when a message is broadcast, you have a constant stream of characters which tick through the pipeline one at a time. Provided that both ends of the pipeline know which characters define the start and end of a message, everything else is just a case of listening for those characters to determine when you should start gathering a message and when you should stop. This makes it possible to broadcast long messages by concatenating the incoming characters together, beginning and ending the message when the header and footer characters are detected. This serial communication allowed me to send the RGB hex value of the detected colour to the Arduino, which would, in turn, set the colour of the RGB LEDs.
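That framing scheme can be sketched in Python as below. The `<` and `>` start/end markers are assumptions for illustration – the actual characters used in the box aren’t specified here – and the Arduino sketch would implement the same character-by-character logic in its `loop()`:

```python
START, END = "<", ">"  # assumed framing characters

def frame(payload):
    """Wrap a payload (e.g. an RGB hex string) in start/end markers."""
    return START + payload + END

class Receiver:
    """Consumes one character at a time, as a serial listener would,
    and returns a complete payload once the end marker arrives."""
    def __init__(self):
        self.collecting = False
        self.buffer = ""

    def feed(self, char):
        if char == START:
            self.collecting = True   # start gathering a fresh message
            self.buffer = ""
        elif char == END and self.collecting:
            self.collecting = False  # message complete
            return self.buffer
        elif self.collecting:
            self.buffer += char      # concatenate incoming characters
        return None                  # message not yet complete
```

On the Pi side, sending the framed value with the pyserial library would look something like `serial.Serial("/dev/ttyACM0", 9600).write(frame("FF0000").encode())` – the port name and baud rate here are assumptions, not the box’s actual settings.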

Lightbox Structure

As I was developing the functionality for the box mostly off site, I built all of the electronics into a foam board box which could later be slid into the alcove. This made the installation in the office easy – just a matter of connecting up power and an Ethernet connection to the Raspberry Pi, for remote control over the network via VNC.

The end result works fairly well, although there are a couple of hacks I used to improve the output a little. In reality, I need to work a bit more on the accuracy of the background subtraction and colour detection. As can be seen above, the masked image often appears quite dark, resulting in a muted colour match, which led me to a bit of a hack. Dissatisfied with the lack of definition in the colour of the lighting, I decided to process each channel (red, green and blue) as normal, but then zero the lowest value in order to enhance the remaining two. I know this is far from ideal and actually makes some colours very inaccurate – particularly those which fall around the midway point of the 0–255 scale. For example, if the colour averaging were to derive a value of R: 98, G: 100, B: 105 from a scene, the variance between the channels is extremely narrow, and every value is of near-equal importance in the true representation of the matched colour – so zeroing one channel distorts it badly. Even so, I think the overall effect is a more defined colour match. Consequently, in its current form, the box is good at matching primary colours but struggles outside of that range. This is something I think could be improved with a little more time put into a better lighting setup – both the actual lighting of the box and the brightness settings within the camera module – along with a more accurate background subtraction.
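As a sketch, the hack amounts to something like this (the function name is mine, not from the original code):

```python
def exaggerate(r, g, b):
    """Zero the weakest of the three channels so the remaining two
    dominate, giving a more 'defined' colour at the cost of accuracy
    for near-grey values."""
    channels = [r, g, b]
    channels[channels.index(min(channels))] = 0
    return tuple(channels)
```

The R: 98, G: 100, B: 105 example becomes (0, 100, 105) – a near-grey pushed hard towards cyan, which is exactly the inaccuracy described above.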

The other aspect that I didn’t quite get round to finishing at the time of writing was the idea of making the box fully automated. My aim would be to have the box operational during specific times of the day, i.e. during office hours, and to have it identify a change in scene – detecting the removal of an old object and the placement of a new one, so that it automatically analyses the new item. In practical terms, I think this would be a case of capturing a second reference image, then periodically capturing new images and comparing them against that reference to determine whether a new object is present.

I really enjoyed developing this project, and aside from its unfamiliar and seemingly open-ended syntax, Python was enjoyable to use and offers a lot of possibilities.
