Join me if you are interested in learning how to use Spacebrew to connect interactive stuff. Space is limited, so make sure to sign up for the event on Meetup as soon as possible. The workshop will take place on the 6th floor, in a room with beautiful views of Central Park. You will also get free access to the entire museum.
Here is a quick overview of the day’s activities:
We will begin with a 30-minute overview of Spacebrew, which will cover why it was created and how it works, followed by a few live demos.
After the workshop we will hang out for a few hours to help people who are interested in learning about more advanced uses of Spacebrew, and who want to integrate Spacebrew into their personal projects.
You don’t need any previous experience with Processing or Arduino to take part in the workshop. If you’ve never written any code before, the workshop will be a little challenging. However, if you are up for the challenge we’ll help you through it.
About MAD: The Museum of Art and Design explores the blur zone between art, design, and craft today. Accredited by the American Association of Museums since 1991, MAD focuses on contemporary creativity and the ways in which artists and designers from around the world transform materials through processes ranging from the artisanal to the digital.
On April 27th I’ll be leading a Spacebrew workshop at the #ArtsTech Unconference in NYC. This session will be a streamlined version of the workshop that I’ve led at our monthly meet-ups. There will be a lot of other interesting presenters, workshops and performances at this event, so I hope to see you there.
Here is a brief description of the workshop that I’ll be leading:
Spacebrew is an open, dynamically re-routable software toolkit for choreographing interactive spaces. Or, in other words, a simple way to connect interactive things to one another. In this hands-on workshop you will use Arduino, Processing and Spacebrew to dynamically connect a light sensor to various apps and objects. Bring your computer and we’ll bring a handful of Arduino boards and sensors that you can share with other participants. No previous experience with Arduino or Processing is required.
Another short post about our new drawing machine. Today we created our second vector image drawing. This drawing is much more complex than our first one. The good news is that the calibration is much better now. The bad news is that complex drawings really don’t look so good without the servo motor that lifts the pen as it travels over whitespace; that is why it looks like we drew a bunch of cute characters only to slash them apart. We will address this issue in the next day or two, and then we’ll start playing around with multi-colored drawings.
I want to give a shout out to Thomas Boucherie from 1001head, the designer who created the cute kawaii characters that we used for this drawing. Below is the original image that we used to create the drawing. Check out his website at 1001head.com for more of his stuff.
I’ll keep this post very short. Just want to share a time lapse video of the first drawing that we created with the LAB’s new drawing robot. We still have to work on the calibration of the machine, and get it working to print rasterized images. More updates to come.
On Monday I finally received the Polargraph Drawing Robot Vitamin Kit for the LAB. There was a lot of excitement since Josh, James, Adi, Meghna, and I had been waiting, impatiently, for over a month to receive this device from Sandy Noble, the creator of the amazing Polargraph project. I am very grateful to Sandy for keeping this project alive and making this kit.
Earlier today we finished assembling the physical and electronic components of the drawing robot. All in all, this process took about 15 hours – a cumulative figure taking into account all of the time that James, Josh, Adi and I have devoted to this project to date. A lot of this time was spent trying to resolve small issues that arose because the online tutorials all feature older versions of the Polargraph SD Drawing Robot.
The best place to start is the tutorial on Instructables. It features a really good overview of how to set up the drawing surface and how to use the software. The software is more complex than you might expect, since it has a lot of cool features and configuration options.
There are a few things about the physical assembly of the laser-cut components and the workings of the Polarshield that are not covered in this tutorial. I jotted down some tips for anyone who is planning to assemble their own kit (and for myself, since I have another kit for my living room arriving in the next week or two).
Assembling the Motor Bracket
The bracket design has been updated so that the brackets can be mounted upside down and to help with cable management. This blog post features several additional pictures of an assembled bracket that will help you understand how all the pieces fit together (picture and blog post from Sandy Noble). I know it is not rocket science, but it is always good to make sure you know what you are doing before you start gluing parts together.
Getting to Know the Polarshield Board
The Polarshield board provides the interface between the Arduino Mega and the motors, SD card reader, touchscreen and optional XBee. This blog post provides an overview of the features of the Polarshield. Below is an annotated picture of the Polarshield that is taken from this blog post (both from Sandy Noble). For the most part this board is plug and play, with one very important exception that I will discuss next.
Connecting the Polarshield Board
When you mount the board on an Arduino, the leads from motor port A will come into contact with the metal casing of the Arduino’s USB port. To prevent a short, add a piece of electrical tape to the underside of the board to isolate those leads from the casing.
Uploading the Firmware (using a Mac)
Quick correction: I previously had stated that you could not upload the Arduino Mega firmware using the Arduino IDE. I was wrong. This means that you can either upload the firmware using the standard approach, via the IDE, or by using the hex file as described below.
Here is how to upload the firmware to your Arduino using the hex file provided by Sandy:
Make sure that you have Arduino installed because you will need to use a few files that are saved inside of your Arduino application package. To explore your Arduino application package just control-click the app icon and select “Show Package Contents” from the drop-down menu.
Copy the avrdude.conf file from your Arduino application directory to the ‘/usr/local/etc’ directory on your Mac. This file is located inside your Arduino app in ‘Contents/Resources/Java/hardware/tools/avr/etc/’. This step is not required for everyone, but it was necessary on my computer, so I advise that you do it just in case.
Open the Terminal application and run the command outlined below, replacing path_to_file, file_name, and port_name with the appropriate paths, names and ports for your computer. As an example, I’ve included the actual command that I ran on my computer.
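For reference, the command takes roughly the following shape. Everything here is a placeholder for illustration (not the exact command from my machine), and the programmer type and baud rate are simply the ones commonly used with the Mega 2560, so adjust them as needed:

```shell
# Illustrative only -- substitute your own config path, hex file, and serial port.
avrdude -C /usr/local/etc/avrdude.conf \
        -p atmega2560 -c wiring \
        -P /dev/tty.your_port_name -b 115200 \
        -D -U flash:w:/path_to_file/file_name.hex:i
```

The `-U flash:w:...:i` argument tells avrdude to write the Intel-hex-format file to flash memory, and `-D` skips the full chip erase.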
If you are using a PC just follow Sandy’s direction on how to upload the firmware to your Mega.
A few months ago a colleague at the LAB asked for my help to develop a simple prototype with a custom-made optical rotary encoder. I was excited to get this opportunity to play around with optical rotary encoders because, even though I had worked with these types of switches in the past, I still did not fully understand how they worked. Helping to build this prototype was a perfect opportunity to bridge my knowledge gap in this area.
Rotary encoders are awesome. Two of my favorite things about encoders are that they provide unlimited rotation and that they are surprisingly easy to make. Did I mention that rotary encoders are awesome? Ok, I definitely drank the rotary encoder kool-aid. This post is my attempt to get you hooked on this kool-aid.
With that in mind, I have put together this overview that explains how rotary encoders work. In the coming weeks I will also post a tutorial on designing a custom optical rotary encoder, along with code examples for Arduinos. Before I dive in, I want to acknowledge the most useful sources of information that I found on the web about rotary encoders. Most of the content in this post is distilled from those websites.
Optical Rotary Encoders: An Overview
An optical rotary encoder is a type of rotary switch whose angular position or motion can be detected by optical sensors. These switches provide unlimited rotation, making them unique and ideal for many different applications. There are two main types of encoders: relative (incremental) and absolute. Here is a brief overview of each:
Relative encoders provide information about the motion of the shaft that can be used to determine direction and speed. These are the most commonly used type of encoders.
Absolute encoders provide feedback regarding the current angular position of the shaft, and are sometimes called angle transducers. The physical position of absolute encoders is always available to the system.
How They Work
Optical rotary encoders feature one or more light sources separated from one or more photo detectors by a surface that modulates light. The photo detectors are configured so that when they are read simultaneously (or at least nearly so), the pattern resulting from their individual states can be used to determine the encoder’s overall state (or position).
For the sake of clarity, here is a step-by-step description of how encoders work: first, light is emitted by the encoder’s light sources. The light passes through slots in a sensing wheel, or is reflected by light and dark areas on a printed wheel. The resulting light patterns captured by the photo detectors are used by the firmware to determine the encoder’s position or motion.
The type and resolution of an encoder are determined by the design of the encoder wheel, and the number and placement of the photo sources and detectors.
Relative Encoders feature two or more photo sources and detectors, coupled with a single-track wheel or two-track quadrature wheel. The configuration of the photo detectors differs depending on the number of tracks on the coded wheel. The photo sensors are offset on one-track wheels, while they are aligned in two-track quadrature wheels. The placement of the sensors is crucial to enabling the rotary encoder to determine the direction of the rotation.
These types of encoders support higher resolution with a smaller number of photo detectors than absolute encoders. Just look at the middle and right coded wheels above to see how it is possible to increase the resolution of relative encoders without changing the number of sensors.
Absolute Encoders feature four or more photo sources and detectors, coupled with an appropriately-designed coded wheel. The resolution on these types of encoders is tightly linked to the number of photo sensors they feature. Each photo detector can hold the equivalent of one bit of data. Therefore, an absolute encoder with four photo detectors is able to support a resolution of 16 different states or positions.
There are two different encoding patterns used on the coded wheels in absolute rotary encoders: binary and gray encoding. Gray encoding is a binary system where adjacent states/positions differ only by one track. This is the preferred encoding scheme because it is less prone to errors.
To parse data from rotary encoders the firmware needs to be able to perform edge detection. Edge detection refers to processes designed to detect sharp changes in state.
In the context of rotary encoders the “edges” refer to the moments when the state of an encoder pin changes from high to low (falling), and vice versa (rising). The detection of an “edge” is then used as a trigger to read the state of all the photo sensors to determine the current motion or position of the encoder.
The “edges” that serve as triggers will vary depending on the type and resolution of the encoder. For example, on a relative encoder we often use only the rising edge of a single track as a trigger. On an absolute encoder the edges are a bit less important since the encoder’s position is always available.
Another important question to consider is whether to use an interrupt- or polling-based approach for edge detection.
Interrupt-based edge detection is the most effective approach, especially for relative encoders. Interrupts enable a software function to be called in response to changes in the state of a physical pin. The Arduino supports both falling and rising interrupts.
Polling-based edge detection relies on continuous sampling and processing of input from all photo sensors. This approach is more computationally-intensive and less responsive, though it is often a satisfactory solution for absolute encoders.
This past week I decided to get into the holiday season and buy myself a Polargraph Drawing Machine Vitamin Kit. The 4- to 6-week wait for delivery is making me feel like a kid waiting for Christmas, though my Christmas has been rescheduled to mid- to late January.
After the idea of a drawing machine came up in one of my projects at the LAB two months ago, I’ve been keeping my eyes open for any information about how to create a drawing machine. Somehow, I came across the Polargraph project in my research but I did not find the project’s code repo or store.
After a few hours of hesitation I decided to pull the trigger. By the next morning I had convinced James to order a Polargraph Vitamin Kit for the LAB. I’ll admit it was an easy sell, since both James and Josh are as excited as I am about playing around with drawing machines.
The Polargraph project is by far the best documented and supported drawing machine project that I have been able to find. I want to thank Sandy Noble for keeping this project alive and moving forward.
I am excited about joining the community of Polargraph owners and I hope that I, and my colleagues at the LAB, will be able to contribute to this project over the coming years.
I’m a big fan of switches, and digital inputs of all types. A little over a year ago I published a blog post about the Switches Library for Arduino that I created. This library provides classes for managing the state of switches, buttons, analog switches, and rotary encoders. I developed this library for a project where I needed to connect a large number of switches to multiple different Arduinos.
I never got a chance to fully document how this library works, until now. Earlier today I published an updated version of the library that includes example sketches, a readme file with detailed description of how each switch class works, and source code that is well documented.
The Switches library was designed to enable you to handle input from different types of physical inputs using a consistent code/design pattern. This library is actually a collection of libraries, where each one handles a different type of switch. The switch-specific libraries are referred to as implementation classes in the documentation. These provide helpful features, such as debouncing capabilities for digital switches and smoothing capabilities for analog switches.
Below is a list of the example sketches that have been added to the library. You can access these examples by selecting File -> Examples -> Switches from the top menu bar in the Arduino IDE app.
analog_switch demonstrates how the analog switch class can be used to set-up an analog switch. The sketch sends a serial message with the switch’s state whenever its state changes.
digital_switch shows how the digital switch class handles a digital switch in momentary mode. The example sketch sends a serial message with the switch’s state whenever its state changes.
multi_state_rgb_button illustrates how to create a sketch that manages a multi-state button with an rgb led that reflects the button’s current state. The example sketch updates the led color and sends a serial message with the switch’s current state whenever the button’s state changes.
rotary_encoder shows how to hook up a rotary encoder using the switches encoder library. The sketch sends a serial message with the encoder’s current state whenever its state changes.
I had a fun time testing out the digital switch, multi-state rgb button, and analog switch examples on the ProtoSnap Pro Mini kit from Sparkfun. I’ve had this little prototyping kit since I got it from Nathan Seidle at the MakerFaire in San Mateo, so I have to give a shout out to Nathan and Sparkfun.
Please note that I was not able to test the rotary encoder example yet. I will update the code as necessary when I test the code in the coming week.
About the Header Image for This Post
I created this image using Creative Commons licensed photos from flickr. To credit the owners of these photos, I put together an annotated version of this image that features links to the original source images.
We used the tutorial published by Jim Bloom on bildr as our starting point. This tutorial walks through the process of hooking up the breakout board to an Arduino, and features a sketch that configures the controller to sense touch events from all 12 electrodes.
Our next step was to figure out how proximity detection works on the MPR121 and what needs to be done to activate and configure this capability. Here is a brief description of how proximity sensing works, taken from one of the application notes on Freescale’s product site:
MPR121 has a unique feature that all the electrode inputs can be internally connected together so that all the surface touch sensing area on the inputs are “summed” together to act as a single large electrode pad. This can effectively increase the total area of the sensing conductor for non-contact near proximity detection for hand approaching.
This means that proximity detection works best when all 12 touch sensing electrodes are connected, and that the MPR121 can sense proximity and touch simultaneously.
After one afternoon of reading through the MPR121 documentation site, and a day of going through the Proximity Detection and Capacitance Sensing Settings application notes and testing different register settings with our prototype, we figured out how to activate the proximity sensing capabilities. In the process, we learned to configure over 15 registers. For the most part, we used the suggested settings from the application notes for these registers.
We recently acquired a lightning sensor at the LAB. This awesome little sensor reportedly can detect lightning within a 40-kilometer range. After a night of thunderstorms we finally found the time to hook up this sensor. Unfortunately, the storms were gone today, so we didn’t get a chance to confirm that the sensor can indeed detect lightning.
Luckily, we recently developed a data logging app for Spacebrew, which we’ve dubbed spacelog. This simple node-based app logs all the data it receives from a boolean, a range, and a string subscription channel. It saves the data as json-formatted strings in a text file in a local tmp directory.
To test the lightning sensor, we hooked it up to Spacebrew and set up a data route to the data logger. We are leaving these apps running over the weekend, during which time we hope to capture some lightning activity from somewhere near the LAB.
More About the Data Logger
The data logging app can be easily started and configured via the command line. To start the app, just navigate to its base directory and run the command below. I’ll admit I’m oversimplifying things a little bit; you will need to read spacelog’s readme file to make sure you have all the required software, such as node.js along with appropriate modules.
node spacelog.js [optional arguments]
The three optional configuration arguments are:
server=server_name where server_name is the hostname of the spacebrew server.
name=app_name where app_name is the app name that will be registered with spacebrew.
file=file_name where file_name is the name of the file where data will be saved.
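Putting the three arguments together, an invocation might look like this (the server hostname, app name, and file name here are made-up placeholders, not values from our setup):

```shell
# Hypothetical example -- substitute your own server, app name, and file name.
node spacelog.js server=my.spacebrew.server name=lightning_logger file=lightning_log.txt
```

All three arguments are optional, so any subset can be supplied and the rest will fall back to the app’s defaults.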
Currently this app is not able to track the source application for the data it receives. Therefore, we recommend running multiple instances of this app if you need to be able to parse data from different sources. When running multiple instances, make sure to configure each one with a unique Spacebrew app name and data file name.
The data logging app is an open source app that uses jog.