Posts from July 2010


Automatic MakerBot Camera Pt. 4 - Updating ReplicatorG

In the previous posts in this series, I hacked up a Canon camera to take pictures with an electronic trigger, built a cable to connect the camera to my MakerBot, and hacked the MakerBot’s firmware to enable it to trigger the camera in response to commands from the controlling computer.

The final step was to hack the desktop software that controls the MakerBot - ReplicatorG.

What is ReplicatorG?

From the ReplicatorG website:

[ReplicatorG] is the software that will drive your CupCake CNC, RepRap machine, or generic CNC machine. You can give it a GCode or STL file to process, and it takes it from there. It's cross platform, easily installed, and is based on the familiar Arduino / Processing environments.

For my purposes, ReplicatorG provides two things. First, RepG is a user interface for controlling the MakerBot hardware.

Second, RepG reads G-code files describing how to build an object, and transmits them to the MakerBot over USB.

Of course, ReplicatorG is open source, and the code is available on GitHub! So, it was simple to clone their repository and start hacking on it myself.

Camera Control via ReplicatorG

While it was relatively simple to update the extruder controller firmware to make it camera-aware, ReplicatorG is a bit more complicated. My first goal was to hack a new “Camera” checkbox into the control panel. Whenever the box was checked, the camera would take pictures. Whenever the box was unchecked, the camera would be idle.

You can find the code required for these changes in this commit on GitHub, but I will try to briefly break them down here:

  • Define a new machine. In the machines.xml.dist file, I defined an experimental MakerBot configuration named "EXPERIMENTAL - Cupcake CNC w/HBP and remote camera". It is essentially a copy of the typical MakerBot configuration with a heated build platform, but in the <tool> definition, I also added a camera="true" attribute.
  • Update the tool model. In ToolModel.java, I added code to represent whether the tool has an attached camera, whether the camera is activated, and how to parse the camera attribute out of machines.xml.
  • Update the machine driver model. In Driver.java, DriverBaseImplementation.java, and Sanguino3GDriver.java, I added the definitions and implementations of triggerCamera() and stopTriggeringCamera(). This is the code that actually sends the TOGGLE_CAMERA serial command to the extruder controller, which I also defined in ToolCommandCode.java.
  • Update the control panel interface. In ExtruderPanel.java, I added the code to draw a new label and checkbox named "Camera", if the machine is configured for a camera, and to respond to check/uncheck events by calling triggerCamera() or stopTriggeringCamera().
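The control-panel piece of this change is a small amount of Swing wiring. Here is a hypothetical sketch of the idea, not the actual ExtruderPanel code; the real driver interface is much larger, and is reduced here to just the two new methods:

```java
import javax.swing.JCheckBox;

// Hypothetical sketch of the ExtruderPanel change: a "Camera" checkbox
// whose check/uncheck events call the two new driver methods. The real
// Driver interface is much larger; it is reduced here to just the two
// methods added in this commit.
public class CameraCheckboxSketch {
    public interface CameraDriver {
        void triggerCamera();
        void stopTriggeringCamera();
    }

    public static JCheckBox makeCameraCheckbox(CameraDriver driver) {
        JCheckBox box = new JCheckBox("Camera");
        box.addItemListener(e -> {
            if (box.isSelected()) {
                driver.triggerCamera();        // checked: start taking pictures
            } else {
                driver.stopTriggeringCamera(); // unchecked: go idle
            }
        });
        return box;
    }
}
```

In the real ExtruderPanel, the label and checkbox are only drawn when the loaded machine definition has camera="true" on its tool.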

Compiling and Running the new ReplicatorG

Compiling ReplicatorG is pretty simple, so long as you have a reasonable JDK environment and have Ant on your path. There are basically two steps:

  1. Copy machines.xml.dist to machines.xml.
  2. Run the appropriate script for your platform: dist-linux.sh, dist-mac.sh, or dist-windows.sh.

ReplicatorG will be compiled and packaged up into the dist/ directory in two forms: an installable package for the chosen platform, and an unpacked version that you can run directly.

Opening up my modified version of ReplicatorG, I selected the “EXPERIMENTAL - Cupcake CNC w/HBP and remote camera” profile from the Machine -> Driver menu, opened up the control panel, and was happy to see the new “Camera” checkbox in place.

After hooking up my camera to the extruder controller’s D9 port, and starting the Remote Button script on the camera, I was able to take pictures by quickly toggling the camera checkbox on and off. I could also leave the checkbox activated to make the camera take pictures continuously.

Automatic Triggering with G-codes

Being able to trigger the camera by hand is all well and good, but my goal was to take pictures automatically at the end of every layer. To do this, I needed to be able to embed camera trigger commands in the G-code for building each individual object.

Looking at the ReplicatorG G-code docs, and the (machine-specific) M-code docs, I chose two codes for working with the camera:

  • M150 - Trigger Camera
  • M151 - Stop Triggering Camera

I may have to change these in the future, as the main ReplicatorG development team claims G- and M-codes for other features, but these work for now.

Modifying ReplicatorG to accept these M-codes (GitHub commit here) was straightforward: update GCodeParser.java to recognize the codes, and call the appropriate triggerCamera() and stopTriggeringCamera() methods.
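Conceptually, the parser change is just two new cases in the M-code dispatch. A simplified sketch follows; the parser scaffolding here is invented, and only the code values (150/151) and the two driver methods come from the actual change:

```java
// Simplified sketch of the M-code dispatch added to GCodeParser. The
// parser scaffolding here is invented; only the code values (150/151)
// and the two driver methods come from the actual change.
public class MCodeDispatchSketch {
    public interface Driver {
        void triggerCamera();
        void stopTriggeringCamera();
    }

    // Returns true if the M-code was recognized as a camera command.
    public static boolean handleMCode(int code, Driver driver) {
        switch (code) {
            case 150:                 // M150 - Trigger Camera
                driver.triggerCamera();
                return true;
            case 151:                 // M151 - Stop Triggering Camera
                driver.stopTriggeringCamera();
                return true;
            default:
                return false;         // not a camera code
        }
    }
}
```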

I could now construct a G-code file which, when “built” in ReplicatorG, would take a picture on demand:

M150     (trigger the camera)
G4 P700  (wait 0.7 seconds for the camera to activate)
M151     (stop triggering)
G4 P1300 (wait 1.3 seconds for the camera to finish)

Finally, it was time to edit the G-code for the models I wanted to photograph.

Typically, G-code is generated by taking a 3D object in STL format and running it through the Skeinforge tool. Skeinforge is a set of Python scripts, which means it is not too difficult to insert your own code.

For now, however, I decided to make a simple hack using a Perl script I wrote called add_camera_events.pl. It looks for (</layer>) comments, which signal the end of a printed layer, and inserts lines to:

  1. Move to a standard pose (X=0, Y=-45),
  2. Trigger the camera and wait for it to finish, and
  3. Move back to the original position.
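The transformation the script performs can be sketched in a few lines. Here is a hypothetical Java re-implementation of the idea (the real script is Perl, and it also moves back to the original position, which would require tracking the last X/Y seen):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical Java re-implementation of the add_camera_events.pl idea:
// after each (</layer>) comment, park the head, fire the camera, and
// dwell while it works. Coordinates and dwell times follow the text
// above; restoring the original position is left as a comment.
public class AddCameraEvents {
    public static List<String> insertCameraEvents(List<String> gcode) {
        List<String> out = new ArrayList<>();
        for (String line : gcode) {
            out.add(line);
            if (line.contains("(</layer>)")) {
                out.add("G1 X0 Y-45      (move to a standard photo pose)");
                out.add("M150            (trigger the camera)");
                out.add("G4 P700         (wait for the camera to activate)");
                out.add("M151            (stop triggering)");
                out.add("G4 P1300        (wait for the camera to finish)");
                // A full implementation would move back to the pre-photo
                // X/Y position here.
            }
        }
        return out;
    }
}
```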

You can find the source for this script in the add_camera_events.pl gist. The source for all of my changes to ReplicatorG is on GitHub, in the “schmarty-camera” branch of my fork of ReplicatorG.

And with that, the computer aspect of this system was finally done!

Wrap Up

Phew! So far I’ve hacked a camera, wired it to the MakerBot, updated the MakerBot firmware to trigger it, updated ReplicatorG to trigger it, and written a script to update G-code files with camera triggers at the end of each layer.

So… does it work? You bet! Stay tuned for more examples and a breakdown video of this whole project in the final post in this series!

Space filling with 3D objects using Processing

While I know I should be finishing my MakerBot time-lapse camera series, I took some time for another project to play with Processing. The above image was rendered in Processing, in real time, in just a couple of minutes!

Basically, I wanted to take a simple shape, defined by an SVG path, and fill it with images of 3D objects loaded from STL files. Specifically, many wonderful MakerBot-printable objects from Thingiverse!

After some Googling around, I found out that this problem is basically a space-filling problem, similar to an excellent Processing sketch named Scattered Letters by Algirdas Rascius, but with a twist.

The basic algorithm is:

  • Load an SVG and render it to an off-screen buffer
  • Set curr_size, the size at which STL models will be rendered, to max_size
  • Choose a random STL model, give it a random orientation, and render it at the current size to an off-screen buffer
  • Try several times to place this model by giving it a random x,y position and checking it for a good fit:
    • Each non-background pixel of the model's off-screen image should fit within the non-background pixels of the SVG's off-screen image.
    • Each non-background pixel of the model's off-screen image should NOT overlap with any non-background pixel of the main display.
  • If a fitting position is found, render the model to the display.
  • Otherwise, shrink curr_size by a step and choose a new model.
  • If curr_size drops below min_size, stop.
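The two fit tests at the heart of the algorithm are simple per-pixel checks. A minimal sketch over boolean masks (true meaning "non-background pixel"), with names of my own invention rather than code from the actual sketch:

```java
// Minimal sketch of the placement test: a candidate model mask "fits" at
// offset (ox, oy) if every one of its pixels lands inside the SVG shape
// and on none of the already-placed pixels. Each mask is a boolean grid
// where true means "non-background pixel".
public class FitCheckSketch {
    public static boolean fits(boolean[][] model, boolean[][] svg,
                               boolean[][] placed, int ox, int oy) {
        for (int y = 0; y < model.length; y++) {
            for (int x = 0; x < model[y].length; x++) {
                if (!model[y][x]) continue;              // background: ignore
                int sy = oy + y, sx = ox + x;
                if (sy < 0 || sy >= svg.length
                        || sx < 0 || sx >= svg[sy].length) {
                    return false;                        // out of bounds
                }
                if (!svg[sy][sx]) return false;          // outside the shape
                if (placed[sy][sx]) return false;        // overlaps a model
            }
        }
        return true;
    }
}
```

The off-screen buffers make these checks cheap: the SVG and each candidate model are rendered once, and placement attempts are then pure pixel comparisons.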

You can find the code for my sketch, which I call ThingiverseCollage, on GitHub. To make it work, you’ll need to follow the installation instructions in the README to install my (very slightly) modified version of the unlekkerLib for STL loading and rendering. I modified it to allow rendering to a PGraphics object, since it originally only allowed rendering to the main PApplet.

A note on STL files: unlekkerLib only loads STL files in the binary format. It chokes dramatically on ASCII STL files, such as those exported from OpenSCAD. I was able to use Zaggo’s excellent Pleasant3D to load ASCII STLs and re-save them, which converts them to binary STLs. As a bonus, Pleasant3D also allows you to orient objects in a way that will make them look most interesting when they are rendered down to 2D in the final image.

An example M.svg, as well as several objects from Thingiverse are included with the code to get started. To use your own SVGs, I have had good luck using Inkscape to draw or import shapes, and save them as the native “Inkscape SVG” or “Plain SVG” formats. Some files might require hand-tweaking; for example, if the width and height header values are something like "100%" instead of a pixel value.

There is also some simple configuration in the sketch to allow the export of PDF files. This is nice because the resulting PDF has full vector data, making it easily rescaled to any size you wish. Unfortunately, the current PDF renderer for Processing renders each triangle of each STL model as a separate path, generating very complicated vector output, which tends to bring Inkscape to its knees. I have had some luck with importing those files, rastering them out to PNG at a high resolution (e.g. 600 dpi), and using Inkscape’s “Trace Bitmap” functionality to re-vectorize them, though this requires some cleanup by hand.

Anyway, this has been a fun little diversion for me for the last couple of days. I hope that you folks find it useful! Post your awesome pictures in the comments here!

Automatic MakerBot Camera Pt. 3 – Updating the MakerBot Firmware

In the previous post in this series, I figured out how to wire up my hacked Canon SD300 with CHDK. I chose to use the “D9” port on the extruder controller board, thinking that should make the software as simple as setting pin 9 to “HIGH” for a brief time whenever I wanted to trigger the camera.

The next step was to update the software on the extruder controller so that it could activate (and deactivate) the camera, in response to commands from the motherboard.

An aside on MakerBot communications

The MakerBot electronics ecosystem comprises three parts: your computer, the MakerBot’s motherboard, and the extruder controller board. Your computer talks to the motherboard via a USB<->TTL interface (such as this FTDI cable from SparkFun). In turn, the motherboard communicates with the extruder controller using another serial protocol, RS-485, over an Ethernet cable. Finally, the extruder controller triggers the camera via the custom cable I made in the previous post.

The software for all three components is available on the indomitable GitHub. The software for your computer is called ReplicatorG, and the source can be found in the MakerBot ReplicatorG GitHub repository. I’ll talk more about ReplicatorG in the next post in this series. For now, we want to focus on the MakerBot G3Firmware GitHub repository, which contains the code for the motherboard (in the SanguinoMaster subdirectory), and for the extruder (in the ArduinoSlaveExtruder directory).

Browsing through the code, we see that these components use their serial interfaces to send packets, where each command is represented by a number. The commands for the motherboard can be found in SanguinoMaster/Commands.h, and those for the extruder in ArduinoSlaveExtruder/PacketProcessor.cpp.

To send a message to the extruder - in this case, to activate or deactivate the camera - we must create a packet for the motherboard. The HOST_CMD_TOOL_QUERY code allows us to send the motherboard a packet which it will then pass along to the extruder controller.
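In other words, the host wraps a command for the extruder inside a HOST_CMD_TOOL_QUERY packet, and the motherboard forwards the inner payload over RS-485. A rough sketch of that wrapping follows; the numeric code values and framing here are illustrative placeholders, not the real protocol, which also adds framing such as length and CRC bytes:

```java
// Rough sketch of the HOST_CMD_TOOL_QUERY wrapping: the host hands the
// motherboard a packet whose payload is a command for the extruder, and
// the motherboard forwards that inner payload over RS-485. The numeric
// code values and layout here are illustrative placeholders.
public class ToolQuerySketch {
    public static final int HOST_CMD_TOOL_QUERY = 10;     // placeholder value
    public static final int SLAVE_CMD_TOGGLE_CAMERA = 60; // placeholder value

    public static byte[] wrapToolCommand(int toolIndex, int slaveCmd,
                                         byte[] args) {
        byte[] packet = new byte[3 + args.length];
        packet[0] = (byte) HOST_CMD_TOOL_QUERY;  // "forward this to a tool"
        packet[1] = (byte) toolIndex;            // which extruder controller
        packet[2] = (byte) slaveCmd;             // command for the extruder
        System.arraycopy(args, 0, packet, 3, args.length);
        return packet;
    }
}
```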

That’s great, because it means the motherboard part of this software hack is done!

In fact, we’ve already hacked the camera, as well, so we’re halfway there!

Hacking a camera into the extruder controller

Since the motherboard already does everything we need (passes along packets from the computer to the extruder controller), we only need to update the ArduinoSlaveExtruder code.

To get this to work, I ended up changing the following files:

  • ArduinoSlaveExtruder/Configuration.h.dist - added configuration options for enabling the camera and setting the pin on which to activate it.
  • ArduinoSlaveExtruder/Extruder.h - added function definitions for turning the camera on and off.
  • ArduinoSlaveExtruder/Extruder.cpp - implemented turning the camera on and off.
  • ArduinoSlaveExtruder/PacketProcessor.cpp - implemented the serial command to toggle the camera.

Building and uploading

If you followed the four links above, you’ll notice that they go to my own G3Firmware GitHub repository. You can download it yourself to play along by cloning the repository and checking out the ECv2.3rc0-camera branch.

To build the firmware and upload it to the extruder controller, we need some common development tools (make, in this case), and the Arduino development environment. With those installed, we can compile everything by setting the ARDUINO_HOME environment variable to the path of our Arduino install’s java directory (e.g., on OS X, /Applications/Arduino.app/Contents/Resources/Java/) and running make.

Once the firmware has been compiled, we can upload it to the extruder controller by using the USB<->TTL cable that usually connects the motherboard to our computer. Plug the cable into the extruder controller, and run the make upload command. You’ll need to make sure that ARDUINO_HOME is set, and you will probably need to alter the Makefile to specify the correct serial port, and maybe to update the call to avrdude to include the path to the Arduino avrdude config file. You can see an example of that in this commit.

Once the firmware is uploaded to the extruder controller, the MakerBot is all set to take pictures!

… Of course, we still have no way to tell the MakerBot to take a picture, so stay tuned for that information in the next update.

Automatic MakerBot Time-Lapse Photography!

It works!

Using a Canon SD300 with CHDK, and some firmware hacks, MakerBot #131 has learned how to make time-lapse videos of all of its prints!

More details (and a how-to!) coming soon! And thanks to Thingiverse user Starno for the bottle opener model in the video.