Category Archives: Software

Using AngelScript CSG

Download AngelScript CSG version V2.0-02, with IDE included
Windows 64bit here.
Linux (K)ubuntu 15.10 64bit here.

AngelScript CSG is a work in progress, see the previous articles on this blog. Today's article is providing practical information on installing and using the software.

Windows 64bit

The AngelScript CSG setup package will install the script compiler as_csg.exe and the editor/IDE as_ide.exe.  However, before installing the setup package, please make sure to install the Visual C++ Redistributable Packages for Visual Studio 2013, issued by Microsoft. You will also need a recent version of OpenSCAD installed on your system.

Linux (K)ubuntu 15.10 64bit

The angelscript_csg.tar.gz contains the binaries for the AngelScript CSG script compiler as_csg and the editor/IDE as_ide. Extract the contents to a suitable folder; ~/angelscript_csg is recommended.  You will also need a recent version of OpenSCAD installed on your system. If you extracted to folder ~/angelscript_csg, add this to the bottom of your ~/.bashrc file:

export PATH=$PATH:~/angelscript_csg

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/angelscript_csg

A small taste of the AngelScript language

If you are familiar with C, C++ or similar languages, the learning curve will be short and painless. AngelScript is similar in most respects. A relatively complete description of the general language is found here, but as a short primer, here are a few key points to know:

  • The language is strongly typed, variables must be declared with a type as in C or C++

  • Line comments are preceded by a double slash //

  • Comment blocks begin with /*  and end with */

  • There are two forms of objects, reference types and value types.

The value types are like the primitives:

 int i     = 0;        // an integer value type

 double pi = 3.14159;  // a floating point value type

The reference types use object handles. Object handles are used to hold references to other objects. When calling methods or accessing properties on a variable that is an object handle, you will be accessing the actual object that the handle references, just as if it were an alias.

 double radius = 3;

 sphere@ s    = sphere(radius);       // handle to a sphere

 cylinder@ c  = cylinder(10,radius);  // handle to a cylinder

 cylinder@ c2 = @c;                   // 2nd handle to the same cylinder

  • Functions are defined as in C/C++, returning value types or reference types

double sum(double a, double b)
{
   return a+b;
}

solid@ sum(cylinder@ c, sphere@ s)
{
   return union3d(c,s);
}

  • Arrays may be defined for value types or reference types

int[] iarr = {0,1,2,3,4,5};

solid@[] sarr = { cylinder(10,3), sphere(3) };


// build a growing array of spheres with increasing radius

sphere@[] spheres;

for(int i=0; i<10; i++) spheres.push_back(sphere(i));


// report the size of the array to terminal (answer will be 10)

cout << spheres.size() << endl();

The above is just a small taste of the language. If it catches your interest, you may want to look at the full language description for more details. Remember also that within the AngelScript CSG IDE you can use Help → View Documentation to find more specifics on how to construct the various CSG objects that are not described in the general AngelScript language description.

Another topic is transformations, but we leave that for another day.

Octave plug-in calling MSVC

This post is about creating plugins to the Windows version of GNU Octave 4.0.0 (Octave) using existing components created with Microsoft Visual Studio 2013 (MSVC). It takes too long to explain why this is sometimes useful, just assume that it is. A typical scenario could be that you have some kind of database accessible from MSVC code and you want to expose the data in Octave.

The figure below illustrates a possible setup. To implement a plugin in Octave, you write a piece of C++ code and compile/link it into a special kind of shared library referred to as an 'oct-file'. Such code can call other components, for example a DLL (Dynamic Link Library) created with MSVC. This way, the oct-file functions as the glue between the Octave application and some other software component.

This sounds simple enough, but in practice there are a couple of things to handle to make it work:

First, Octave and its oct-files are compiled using the MinGW GNU g++ C++ compiler (It is not practical to recompile Octave using MSVC) and GNU g++ code is not binary compatible with MSVC code. Therefore, we cannot statically link the MSVC dll with the oct-file and we cannot pass C++ objects in the calls between them, because name mangling schemes and calling conventions are incompatible between the compilers.

Second, Octave and its oct-files are compiled as 32bit. Even if we find a way around the first problem, it will not work if the MSVC component is compiled as 64bit; it has to be 32bit, like Octave.

Now the whole thing sounds a lot more complicated, but the problem description also provides the clues to the solution:

  • Use a C interface in the calls from the oct-file (GNU g++) to the MSVC code

  • Load the MSVC dll dynamically instead of linking statically

  • Compile the MSVC code as 32 bit

MSVC C++ code

The first bullet above means we must provide global functions declared as extern "C" in the MSVC C++ code (linked as a DLL, and exported). Here is a sample header declaration:

#ifndef MSVC_COMP_H
#define MSVC_COMP_H

#ifdef MSVC_COMP_EXPORTS
  #define MSVC_COMP_PUBLIC __declspec(dllexport)
#else
  #define MSVC_COMP_PUBLIC __declspec(dllimport)
#endif

extern "C" {

   // msvc_get_data returns a pointer to internal data, must not be deleted outside

   MSVC_COMP_PUBLIC double* msvc_get_data(const char* file_path, const char* data_id, long* nsamp);

}

#endif // MSVC_COMP_H

Note that the function takes and returns parameters as if it were an old style C-function; no C++ objects or pointers are allowed, due to the compiler differences. The extern "C" statement removes any name mangling in the compiled name, which makes it possible to look the function up from other code (see below).

Notice also that in this case, the function returns a pointer to some numerical data. Such data may be dynamically allocated by the MSVC compiler, and cannot be deleted in the GNU code (in this case the pointer points to a global variable internally in the MSVC code and it will be cleaned up in the next call).

The above is just the header file declaration; the implementation of the msvc_get_data function can use all C++ constructs, objects, pointers etc.  The msvc_get_data function is thus just an adapter.
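To make the adapter idea concrete, here is a hedged sketch of what the implementation side could look like. The real data source is not part of this post, so the function below simply fabricates two columns of dummy data; the export macro is stubbed out so the sketch also compiles outside MSVC, and the static-buffer ownership scheme matches the "cleaned up in the next call" behaviour described above.

```cpp
#include <string>
#include <vector>

// Portable stand-in for the export macro so the sketch compiles anywhere;
// the real header uses __declspec(dllexport/dllimport) on Windows.
#ifndef MSVC_COMP_PUBLIC
#define MSVC_COMP_PUBLIC
#endif

namespace {
   // Buffer with static storage duration: the returned pointer stays valid
   // until the next call, and the DLL (not the caller) owns the memory.
   std::vector<double> g_buffer;
}

extern "C" MSVC_COMP_PUBLIC double* msvc_get_data(const char* file_path,
                                                  const char* data_id,
                                                  long* nsamp)
{
   // Inside the adapter we are free to use any C++ we like (std::string,
   // containers, a real database API, ...). Here we just fabricate a
   // 2-column data set, since the real data source is not shown.
   std::string path(file_path);
   std::string id(data_id);

   const long n = 5;
   g_buffer.clear();
   for(long j=0; j<2; j++)
      for(long i=0; i<n; i++)
         g_buffer.push_back(j*100.0 + i);   // column major order

   *nsamp = n;
   return g_buffer.data();   // valid until the next call, caller must not delete
}
```

Only plain C types cross the boundary; everything C++ stays on the MSVC side of the adapter.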

OCT-file code

The second bullet in our list, and how the OCT-file interfaces with the MSVC code, is best illustrated using an example (note that most error checking has been omitted for clarity).

File oct_get_data.cpp:

#include <octave/oct.h>

#include <fstream>

#include <windows.h>

#include <dMatrix.h>


// function pointer to the MSVC function

typedef double* (*msvc_gdfunc)(const char* file_path, const char* data_id, long* nsamp);


// Octave function declaration

DEFUN_DLD (oct_get_data, args, nargout, "oct_get_data String")
{
   int nargin = args.length ();

   if(nargin < 2) {
      octave_stdout << "oct_get_data called with "
                    << nargin << " input and "
                    << nargout << " output arguments.\n";
      octave_stdout << "Usage:  oct_get_data(file_path,data_id); \n";
      return octave_value_list();
   }

   // get the arguments from the Octave call
   int iarg=0;
   std::string file_path   = args(iarg++).char_matrix_value().row_as_string(0);
   std::string data_id     = args(iarg++).char_matrix_value().row_as_string(0);

   // load the MSVC DLL dynamically using the Windows API
   HMODULE hdll = LoadLibrary("msvc_comp.dll");
   if(hdll != NULL) {

      // get the function pointer
      msvc_gdfunc msvc_get_data = (msvc_gdfunc)GetProcAddress(hdll,"msvc_get_data");
      if(msvc_get_data) {

         // call the function in the MSVC dll
         long nsamp = 0;
         if(double* values = msvc_get_data(file_path.c_str(),data_id.c_str(),&nsamp)) {

            // assign output data, it is returned in column major order, 2 columns
            Matrix result(nsamp,2);
            for(int j=0;j<2;j++) {
               for(int i=0; i<nsamp; i++) {
                  result(i,j) = *values++;
               }
            }
            return octave_value(result);
         }
      }
   }

   // empty return value
   return octave_value_list();
}

The code above declares a function pointer type for our MSVC function. It then uses the Windows API to load the DLL dynamically and look up the pointer to the function. If found, it calls the MSVC function and constructs a suitable matrix object to return to the Octave application.

Obviously much more rigorous error checking is in order.

Compiling the OCT-file

As previously mentioned, the oct-file must be compiled using the GNU C++ compiler to be compatible with Octave. For this purpose we use the mkoctfile utility. It can be done within Octave, or via a Windows batch script as shown below.

File cppoct.bat:

@echo off

REM Script to compile C++ into Octave oct-files


REM configure Octave oct compiler

set OCT_VER=4.0.0

set OCT_HOME=C:\Octave\Octave-%OCT_VER%

set OCT_BIN=%OCT_HOME%\bin

set OCT_INC=%OCT_HOME%\include\octave-%OCT_VER%

set OCT_LIB=%OCT_HOME%\lib\octave\%OCT_VER%


REM set Octave bin dir first in path so g++ can be found

set PATH=%OCT_BIN%;%PATH%

REM turn on echo so we can see what is going on as we compile

@echo on

%OCT_BIN%\mkoctfile.exe -I%OCT_INC% -I%OCT_INC%\octave  -L%OCT_LIB%  %1

@echo off

REM tidy up intermediate files

del *.o

Running this script for the oct_get_data.cpp file generates the oct_get_data.oct file.

Using the plugin in Octave

Once the oct-file and the msvc dll exist, it is recommended to store them in a common folder in the file system. In Octave, you then need to specify that folder using 'addpath'.

data = oct_get_data("myfile.dat","whatever");
plot( data(:,1), data(:,2));

Assuming the data returned was a matrix of X,Y data, the result could look something like below



If you find this useful, please add a comment below :-)

Code::Blocks with MSVC2013

This is a quick placeholder post on how to set up the C++ IDE Code::Blocks with the Microsoft Visual Studio 2013 tool chain. The main information is found in this PDF.

Quote from the PDF:

“This document aims to explain a way to configure Code::Blocks on Windows using different Visual
Studio C++ compilers and Windows SDKs. Where relevant it is discussed how to compile for x86
(32bit) and x64 (64bit) executables.

The general approach taken is to employ the Code::Blocks global variables for configuration of the
compiler and related tools, i.e. set the global variable values according to the needs of the compiler,
SDK and target processor architecture. Examples are provided for MSVC2010 and MSVC2013. By
extension, it should be feasible to reconfigure for other MSVC versions.”

If you have questions or comments about the method described, please comment below.

A wheel centre cap

In the post about printing 3d gears, we saw that it was possible to print replacement gears for car parts. I have now received a report that the printed gear works after several weeks of in-car testing, so let us count that as a success. In fact, it was so successful that I got a request to print another part that was missing; a press-fit wheel centre cap, original as below.


The owner also wanted the logo on the replacement part. When you don’t pay, there is no limit to what you can ask for :-) Anyway, I thought we might give it a try.

The first step was simply to place the original on the flatbed scanner and make an image of the logo. I could have found the logo on the web, but that is cheating. Instead, the scanned image was imported into Photoshop, turned into a monochrome image, blurred/clipped, and saved to a PNG file.


Then, OpenSCAD  was fired up, and the following script was edited


In the above code, the d1 to d4 parameters define measured diameters (using a caliper) on the original. d1 is the outermost diameter. Similarly h1 to h4 define the heights measured from the bottom when logo is pointing down.

The “logo()” module imports the scanned image and turns it into a 3d object. A slice of that is created by intersecting it with a “cube” (actually a cuboid). The intersection is then scaled, rotated and translated to fit the size and orientation of the printed object.

The “bottom()” module is simply a short cylinder minus the logo at bottom and a smaller cylinder on top, to create a “rim” on the bottom part.

The “teeth()” module describes the 2d profile of the teeth that grips the wheel and then performs a rotational extrude (360 degrees). This is then intersected with the result of the “cross()” module which simply defines a cross from 2 cuboids. The result is 4 teeth, separated by 90 degrees.

All in all, less than 60 lines of code. We then export this OpenSCAD model as an STL file.


There are many ways to process an STL file, but generally it needs to be run through a “slicer” program to generate the G-code that a printer can understand. There are many very good slicer programs, including slic3r and Cura, but recently I have been using KISSlicer, as it has many nice customization options.


After completing the slicing, we have the G-code to send to the printer. I am using OctoPrint running on a wireless Raspberry Pi to control the printer, so the G-code is sent to OctoPrint via the web browser on the PC. OctoPrint can also display the temperature of the hot end and the heated bed. All we have to do is check that the printer calibration is ok and commit the print:


When finished, we have something that closely resembles the OpenSCAD model.


When we turn the print around, we also see something that resembles the logo. It is not perfect, but it is there. One idea is to fill the void with some dark filler and sand the top surface a bit. Then it might pass :-)


A challenge with a part like this is that the printed part is relatively brittle compared to the original, so it is hoped that the teeth simply do not break off. This is why the printed teeth are made wider than in the original, where it is only the smaller teeth that grip the wheel.

260 000 images on a Raspberry Pi

In the previous post the op_lite object oriented database library for C++ was introduced.  I have been testing this library on Windows and Kubuntu using images from the Raspberry Pi1 Model B weather camera. The camera captures a JPEG image of size 1296×972 every minute, which means that each day there are 1440 additional images to put in the weather camera database. The database now has a viewer written in C++ based on op_lite and wxWidgets. It works fine on both Windows and Linux.

The PI weather camera has been running steadily for just about 6 months now; it has so far accumulated just over 260 000 images (database size is > 20GB) showing the daily weather plus stars at night. Today I wanted to try op_lite and the viewer on another Pi1 Model B, so I compiled the database viewer application there. This is straightforward as Raspbian is a Debian derivative, just like Kubuntu.

The 20GB database was copied from Windows across the LAN to the PI, which has a 320GB USB hard drive connected, formatted as Linux ext4. The copy took a few minutes and the compilation of the software took longer, but it worked!  It shows that the database is compatible and can be freely copied around Windows/Kubuntu/Raspberry PI.

260 000 images captured on a PI1, viewed on another. It doesn’t work as fast as on a desktop, but it is certainly usable. I have a new PI2 Model B coming soon, and it will be interesting to see how things perform there.  As the PI2 is said to be about 6 times faster than the PI1, it should be good!


op_lite – OODB library for C++

This post announces the existence of a class library for C++ called op_lite.  It is the result of something I have been wanting to do for quite some time, so this is a kind of major milestone. The library is a portable, lightweight object persistence library for C++ for Windows (MSVC2010) and Linux (g++). It makes it easy to write C++ applications with in-process persistent objects, i.e. objects that live within a database file.

Download version V1.0-00:  source code and white paper.   If you download and try the library, I would appreciate your comments below this post.

Update 16-Oct-2015: Version V1.0-02 source code available.



In the world of C++, a large number of excellent open source libraries exist for almost any conceivable purpose. In addition, most of these libraries are cross platform, i.e. they may be used under several different operating systems.  Many database libraries also exist, but in this author’s opinion, it is a problem that the word ‘database’ is for many people synonymous with a traditional relational database based on some form of SQL. Although these databases are very powerful, the programming model they impose does not support object oriented programming.

What is missing is an open source and portable database library supporting object orientation, allowing the developer to use native C++ classes with persistent instances living naturally in the database. Such systems do exist, but there are unfortunately few open source libraries in this ‘pure’ category. op_lite’s objective is therefore to provide a single process object oriented database library for C++ applications, with support for persistent containers and polymorphic pointers.

The name op_lite stands for Object Persistence – Lightweight. It is a C++ library that offers automatic in-process object persistence of C++ objects, the application code never explicitly reads or writes to the database – it all happens behind the scenes, given that certain programming patterns are followed. This is similar to most other “real” OO databases. The effect is that the objects are perceived to “live in the database”.

Design and implementation

op_lite is implemented as a small C++ class library. The library provides helper classes for managing databases, base classes for deriving user defined persistent classes, and classes for declaring persistent member data.  From the figure below it is clear that op_lite relies on SQLite for low level implementation. Several libraries exist that encapsulate relational databases, but op_lite tries to take a different approach than most of these. In op_lite, the use of SQLite is mostly considered an implementation detail, however a very useful one. Reading the source code of an application using op_lite will not show many signs of SQLite being used, it is mostly hidden from view.


All in all, the reasons for using SQLite as back end are
– it is a zero-configuration, in-process engine
– it is proven technology, extensively tested
– it is very efficient
– it is open source
– it is very well documented
– it is portable (both source code and databases)
– it supports virtually unlimited size databases
– it allows using standard SQLite tools for special purpose operations
– it relieves the author of op_lite of developing a competing back end :-)

Using the library

For an in-depth description of the library, please see the white paper mentioned early in this post.  Here, we just give a small taste of what it looks like. Assuming the application needs a 2-dimensional Point class, containing x- and y- floating point coordinates, declaring it as a persistent class using op_lite may look something like this:


An op_lite persistent class needs to inherit from op_object, or from another class derived from op_object. It also needs to have a default constructor, plus override the pure virtual function op_layout declared in op_object. The persistent data members are declared using op_double, one of the supported persistent types:


Looking at the Point.cpp implementation, we find more characteristics of persistent classes. Notice that persistent members must be initialised using op_construct taking the member variable as a parameter or op_construct_v1 which also takes an initialisation value of the corresponding transient type.


Furthermore, we notice the use of op_bind  in the op_layout overload. Here, each member variable is “bound” so that it will appear in the database.  This is all that is required to read and write data!  Once we have a persistent class like Point, we can create persistent objects in a database this way:


In this tiny example, several important aspects are illustrated. One is the use of op_mgr() to create or access databases. In this case we create a new database file with internal logical name “poly_shapes” , stored in the given file path “db_path”.

The next thing that happens is that an op_transaction is declared as a stack object. This starts a database transaction, plus it is a clean way of making sure that the Point instances do not cause a memory leak at the termination of the if scope. The transaction causes the objects created within the scope to be committed to the database, plus the transient cache objects are automatically removed, while the persistent objects remain in the database. Finally, the database file is closed.
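The scope-bound commit behaviour described above is classic C++ RAII. The stand-in below is not op_lite's actual class (the names and internals are hypothetical); it is a minimal sketch of the pattern: the constructor begins the transaction, and the destructor, which runs when the stack object leaves the enclosing scope, performs the commit and cleanup.

```cpp
// Hypothetical stand-in for op_lite's op_transaction, illustrating the
// RAII pattern: begin on construction, commit when the stack object
// goes out of scope (the real class also flushes the transient cache).
class scoped_transaction {
public:
   scoped_transaction()  { committed = false;  /* BEGIN TRANSACTION   */ }
   ~scoped_transaction() { committed = true;   /* COMMIT + drop cache */ }
   static bool committed;
};
bool scoped_transaction::committed = false;

// demonstrate that the commit happens exactly at end of scope
bool commit_happens_at_scope_exit()
{
   bool inside;
   {
      scoped_transaction t;                     // transaction starts here
      inside = scoped_transaction::committed;   // still false inside the scope
   }                                            // destructor runs here: commit
   return !inside && scoped_transaction::committed;
}
```

Because the destructor always runs when the scope exits, the objects are committed even if the scope is left early.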

For more details, including how to restore persistent objects from the database, see the poly_shapes example code in the test example folder that is found in the source code download.

Other features

Sometimes, a persistent object contains a pointer to another persistent object, possibly of a different type.  This is done using the op_ptr<T> template. For example, a persistent pointer to a Point is declared as op_ptr<Point> .  When  op_ptr<Point> is stored in the database, it is represented as text:  “0 Point 123”. The first value (zero) indicates the format, the second value (class name) indicates the concrete type of the object, and the third value is the object’s persistent identifier.

All you have to do to get at this object is to dereference the op_ptr<Point> variable using the -> operator, just like normal pointers. When you do that, op_lite will create a cached Point instance on your behalf and initialise the member variables with whatever is stored in the database. However, this means that op_lite must know which C++ class to instantiate when it sees a text like “Point”. This is achieved via the “type factory”, where an application declares the persistent classes it is using. Such code must be executed at each startup of an op_lite application.
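The type factory idea can be sketched in plain C++. Everything below is a hypothetical stand-in illustrating the pattern, not op_lite's actual API: a map from class name to a creation function, plus a restore function that parses the stored “0 Point 123” text.

```cpp
#include <functional>
#include <map>
#include <sstream>
#include <string>

// Minimal stand-ins for illustration; op_lite's real classes differ.
struct op_object { virtual ~op_object() {} virtual std::string type() const = 0; };
struct Point : op_object { std::string type() const override { return "Point"; } };

// The factory maps a class name, as it appears in the stored text,
// to a function creating a default-constructed instance of that class.
std::map<std::string, std::function<op_object*()>>& factory()
{
   static std::map<std::string, std::function<op_object*()>> f;
   return f;
}

// Restore from a persistent pointer text such as "0 Point 123":
// parse the format id, the class name and the persistent object id,
// then ask the factory to instantiate the named class.
op_object* restore(const std::string& ptr_text, int& pid)
{
   std::istringstream in(ptr_text);
   int format = 0;
   std::string class_name;
   in >> format >> class_name >> pid;
   auto it = factory().find(class_name);
   return (it != factory().end()) ? it->second() : nullptr;
}
```

An application would register each of its persistent classes once at startup, e.g. factory()["Point"] = [](){ return new Point; }; which is exactly the kind of code that must run before any object can be restored.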


Another interesting capability is the ability to use C++ container classes as member variables, as in the example below. In this case, the ShapeCollection is a persistent class that has a persistent vector of persistent polymorphic pointers as a member variable. Implementation of persistent containers is achieved using the MessagePack library, which is provided as part of the source code.


The following “cheat sheet” provides an overview of the various classes provided in op_lite. For a more complete description, see the white paper and the provided example source code.


Building op_lite from source code

There are 2 ways op_lite can be built: either using the Code::Blocks project file, or via the provided makefiles generated from the Code::Blocks project file. Using the makefiles is the easiest option as they have been prepared to have few dependencies. Before building op_lite you must download and build boost and MessagePack.

Building with ‘Makefile.msvc’ on Windows

The file ‘Makefile.msvc’ builds op_lite on Windows using MS Visual Studio 2010 (Express or full edition). To use the makefile, open the “Visual Studio Command Prompt (2010)” from the Windows start menu, navigate to the op_lite source directory. Edit the file Makefile.msvc and adjust the two lines on the top, so they point to where boost and MessagePack have been installed and built (replace the bold parts below)

MSGPACK_INCLUDE = E:\\cpde3\\zdep\\3rdparty\\msgpack\\msgpack-c\\include
BOOST_INCLUDE = E:\\cpde3\\zdep\\3rdparty\\boost\\boost_1_55_0

Then run the makefile

$ nmake -f Makefile.msvc

The generated op_lite.lib and op_lite.dll files are found in the .cmp\msvc\bin\Release subfolder.

Building with ‘Makefile’ on Linux

The file ‘Makefile’ builds op_lite under Linux using g++. To use the makefile, open a terminal window in the op_lite source directory. Edit the file Makefile and adjust the two lines on the top, so they point to where boost and MessagePack have been installed and built.

MSGPACK_INCLUDE = /usr/local
BOOST_INCLUDE = /home/ca/home_work/cpde_root/zdep/3rdparty/boost/boost_1_55_0/

Then run the makefile

$ make

The generated shared object library is found in the .cmp/gcc/bin/Release subfolder.

Feedback wanted

I would like to have your feedback on this library, please comment below. If the interest is sufficient, the library could move to github or similar places. Right now I publish it here as a beta for review. I hope you enjoy it :-)

A second experiment: Boolean operations

In the previous post, it was shown how it is possible to convert a real object into a 3d computer model, suitable for replication using a 3d printer.  This was done using just a flatbed scanner and some software.  The object chosen there (the wrench/spanner) was 2-dimensional if you ignore the thickness, so some may say this was cheating a bit. Could we achieve a similar effect with a more truly 3-dimensional object? The following object is our second replication challenge:


This object is not entirely flat, so it is a more challenging task to create a virtual replica of it.  If we put it on the flatbed scanner and scan it from 2 projections, from below and from the side, we get the result below (scanner lid open). The only thing done here is to present the two projections in the same image and crop away irrelevant areas to the left and right.


We now give these images the same treatment as in the first experiment. That means stretching the histogram, blurring the surfaces and using curve tools in a bitmap editor.  The goal is to emphasize the edges in the two projections, and remove anything else in the images. Below, the resulting projections are shown together for illustration purposes, but observe that each projection is treated separately.


We then give both of these  images the same treatment as before, using potrace, inkscape and pstoedit. Again, the results are simplified profiles in DXF file format, using only LINE segments:





This time, we employ some more of the powerful tools of OpenSCAD, that is ‘Boolean operations’.  For the uninitiated it can be compared to mathematical set operations,  for example intersection, union and difference.  But instead of operating on mathematical sets, OpenSCAD operates on 3-dimensional solid objects. Watch what happens if we define 3 solid objects (box, thing_A and thing_B) and subtract them from each other in the right order:


Not bad, huh? A small miracle… Again, how did this happen? Look at the solids we used. Below shows “thing_A” in yellow and “thing_B” in transparent grey. These were the bodies extruded from the image projections.


We may compare “box” (red) and “thing_A”  (transparent grey) in a similar manner:


What happens is two subsequent Boolean operations:

1. The red box is the original positive body, and “thing_A” gets subtracted from it.  That makes the “thing” without the holes.

2. Then, “thing_B” is subtracted from the result of 1. It is as if the holes get punched out using a punching tool. In many ways, that is exactly what happens.

The final result is the green “thing” as shown in OpenSCAD above. We can also save this as an STL file,  a collection of 3d triangles, and present them in wireframe mode:

Such triangles are what 3d printers need. Or to be more precise, it is the starting point of 3d printing. When printing, the triangles are cut with horizontal planes from bottom to top, also a kind of Boolean operation, the resulting intersections are horizontal line segments that can be used to generate G-code to steer the printer motors.
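The core geometric step of that cut can be sketched in a few lines of C++. This is only a minimal illustration of cutting one triangle with a horizontal plane, under simplifying assumptions (it ignores degenerate cases such as a vertex lying exactly on the plane), and is not how any particular slicer implements it.

```cpp
#include <array>
#include <vector>

struct Vec3 { double x,y,z; };

// Interpolate the point where edge a-b crosses the plane z = h.
static Vec3 edge_plane_point(const Vec3& a, const Vec3& b, double h)
{
   double t = (h - a.z)/(b.z - a.z);
   return { a.x + t*(b.x - a.x), a.y + t*(b.y - a.y), h };
}

// Intersect one STL triangle with the horizontal plane z = h.
// If the plane cuts the triangle, the result is one line segment
// (two points); such segments are later chained into the closed
// contours a slicer turns into G-code.
std::vector<Vec3> slice_triangle(const std::array<Vec3,3>& tri, double h)
{
   std::vector<Vec3> seg;
   for(int i=0; i<3; i++) {
      const Vec3& a = tri[i];
      const Vec3& b = tri[(i+1)%3];
      // an edge crosses the plane when its endpoints lie on opposite sides
      if((a.z < h && b.z > h) || (a.z > h && b.z < h))
         seg.push_back(edge_plane_point(a,b,h));
   }
   return seg;   // empty, or the two endpoints of the intersection segment
}
```

Repeating this over every triangle at every layer height yields the stacks of contours that the printer motors trace out.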

But that subject is for some other time.

A reverse 3d-printing experiment

I am in the process of buying a 3d-printer, so a good idea is to look at ways of creating 3d models to print. I have some time yet until I get the printer, so it is a good time to learn about the software you may need to master. Of course, the printer has its own software, but you also have to use other programs that are independent of the actual printer. In this post we shall look at some possibilities using mostly free, open source programs. The main exception is use of Photoshop, but I presume Gimp or even inkscape could do the same job as Photoshop here, I’m just using what I know a bit better.

The starting point when using a 3d printer is a virtual 3d computer model of the object you are printing. But sometimes you have a real object and want to create a 3d replica. This will be the subject of our “reverse 3d-printing experiment”:  Create a 3D computer replica of a real object. Below is our test specimen, a nostalgic object as it is a metal wrench (or ‘spanner’ if you are in the UK) I got as a kid. It came with my very first bicycle.  Can we make a computer replica?


If we place the wrench on our cheap flatbed scanner, maybe there is a way to obtain an accurate profile of it? Let us try:


Below left is the raw output from the flatbed scanner, a BMP file.  On the right is the same image after slight manipulation using an old Photoshop CS2. The features employed were to select the background with the “colour range” feature, adding some feathering to create a smooth edge.  Then one left-over background area was clipped.  After that, the inverse of the selection was chosen, and the “levels” feature was used to blacken the wrench. Finally, some “Gaussian blur” was applied to the whole image in order to soften the edges even more, and remove any remains of edge highlights from the scanning. The result is basically a black and white image of the wrench. But it is still just a raster image.


What we need is a vectorized representation of the wrench edges.  The following steps are a little convoluted, but they could be simplified by improving the DXF file support in one of the open source programs. Until we have that, we can do the following:

Vectorizing the bitmap image

The first thing we do is to run potrace, a program that boasts the feature we want: “Transforming bitmaps into vector graphics”. We are going to require a file in DXF format, describing the wrench profile, and potrace can generate DXF files. However, it creates a DXF file with some features not understood by other programs, so we have to take a detour via SVG format and Encapsulated Postscript (EPS) format before we return to a simpler representation of DXF that can be used. That means a version with only simple LINES. Below is how I did it, using both Windows and Linux along the way. First we run ‘potrace’ to get the SVG file from the fixed-up BMP file:


Let us copy that SVG file over to a Linux Kubuntu machine and run a couple of programs there. First we install inkscape and another program called pstoedit, based on some tips found here.

$ sudo apt-get install inkscape
$ sudo apt-get install pstoedit

Now that we have the required software to complete our vectorization detour, let us use it.  First we create an intermediate EPS file using inkscape:

$ inkscape -E intermediate.eps Wrench_fix.svg

Second we create the final, simplified DXF file using pstoedit, using the option “-polyaslines” to create a simplified DXF file with individual, straight lines. No polylines or spline curves. The final vectorized file is ‘wrench_os.dxf’ here

$ pstoedit -dt -f dxf:-polyaslines\ -mm intermediate.eps wrench_os.dxf

We can now open and view the created DXF file in for example LibreOffice and observe what we have created. It is no longer a bitmap image, but instead a trace of the wrench edges, i.e. a series of vectors.
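As a small illustration of what such a simplified DXF file actually contains, the hedged sketch below reads the LINE entities back. It assumes the flat group-code/value pair layout described here (codes 10/20 for the start point, 11/21 for the end point) and skips everything else; it is not a general DXF parser.

```cpp
#include <sstream>
#include <string>
#include <vector>

struct Line2d { double x1,y1,x2,y2; };

// strip surrounding whitespace from a DXF value line
static std::string dxf_trim(const std::string& s)
{
   std::size_t b = s.find_first_not_of(" \t\r");
   if(b == std::string::npos) return "";
   std::size_t e = s.find_last_not_of(" \t\r");
   return s.substr(b, e-b+1);
}

// Read LINE entities from a simplified DXF stream. A DXF file is a flat
// sequence of group-code/value line pairs; for a LINE entity, codes 10/20
// hold the start point and 11/21 the end point. Real DXF files also carry
// header and table sections, which this sketch simply passes over.
std::vector<Line2d> read_dxf_lines(std::istream& in)
{
   std::vector<Line2d> lines;
   std::string code, value;
   Line2d cur = {0,0,0,0};
   bool in_line = false;
   while(std::getline(in,code) && std::getline(in,value)) {
      int gc = std::stoi(code);
      if(gc == 0) {                         // start of a new entity
         if(in_line) lines.push_back(cur);  // finish the previous LINE
         in_line = (dxf_trim(value) == "LINE");
         cur = Line2d{0,0,0,0};
      }
      else if(in_line) {
         double v = std::stod(value);
         if     (gc == 10) cur.x1 = v;
         else if(gc == 20) cur.y1 = v;
         else if(gc == 11) cur.x2 = v;
         else if(gc == 21) cur.y2 = v;
      }
   }
   if(in_line) lines.push_back(cur);
   return lines;
}
```

With the -polyaslines option used above, every entity is such a simple LINE, which is what makes this naive reader feasible.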


Creating a 3D model

This is where the fun begins in earnest. There is a really good, and totally free, program called OpenSCAD which has some extremely powerful features that enable modelling of 3D objects. This includes so-called “Boolean operations” in CSG modelling, but also features for extruding 3D objects from 2D profiles like the one we have just created. So let us try the following single command in OpenSCAD and watch what happens:

linear_extrude(height = 10) import("Wrench_os.dxf");

From that single line, we got something we recognise!

What happened here? We had created the wrench profile in the DXF file. To understand the result, read the OpenSCAD command above from right to left.

First, we imported the DXF file containing a profile in the XY-plane. Second, we extruded (a ‘sweep’ if you prefer) the complete profile 10 units in the Z-direction. The result was a totally recognizable virtual 3d wrench, looking just like the original, nostalgic bicycle wrench. 

With this model, we have everything required for creating a 3d printed replica. The next logical step in such a printing process would be to create an STL-file, which simply contains a number of 3-dimensional triangles describing the outer surface of the wrench model.


To prove that it works, we can view the generated STL file in a free STL viewer (chosen at random):


The STL file is available (zipped) here.


There are some incredibly powerful and free software tools available that, combined with a bit of creativity, can produce some rather impressive results. This is just great. OpenSCAD is a key tool, so this author will spend some time learning it better. A great introduction to OpenSCAD is this series of tutorials (recommended):

How to use Openscad (1), tricks and tips to design a parametric 3D object
How to use Openscad (2): variables and modules for parametric designs
How to use Openscad (3): iterations, extrusions and more modularity!
How to use Openscad (4): children and advanced topics

There will be more on 3d printers from this blog.

Sunny Sunday time-lapse

Today was a nice and sunny winter Sunday, with outside temperatures around -6°C. From the weather camera, which captures an image every minute, a time-lapse video covering approximately 12 hours was assembled. It shows the changing weather conditions, plus a few cross country skiers enjoying themselves in the sunshine. It takes about 3.5 minutes to watch. I think you will also agree that the Raspberry PI weather camera does a pretty good job, with the heater system keeping it in focus!

Time-lapse 25. Jan 2015


The video was created automatically by first generating a label in the bottom left corner of each image. The information about exposure and camera temperature sensors is taken from XML files generated by the on-board RPI software.

Second, 3 interpolated images were created between each pair of originals, effectively giving the impression of images taken every 15 seconds. The XML generation, image interpolation and labelling software is home-grown.

The final step was time-lapse video generation using ffmpeg.  A bash script running under Kubuntu orchestrates the whole thing,  but a very similar process can easily be achieved under e.g. Windows.
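The steps above can be sketched as a small script; the frame pattern, codec settings and output name are assumptions, since the actual bash script is not shown here. With originals every 60 seconds and 3 interpolated frames per pair, the effective interval becomes 15 seconds:

```shell
CAPTURE_INTERVAL=60                              # seconds between camera images
INTERP=3                                         # interpolated frames per pair
EFFECTIVE=$((CAPTURE_INTERVAL / (INTERP + 1)))   # = 15 seconds per frame

# Encode the labelled + interpolated frames into a time-lapse video.
# The frame_%05d.jpg pattern and x264 settings are illustrative only.
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -y -framerate 25 -i frame_%05d.jpg \
         -c:v libx264 -pix_fmt yuv420p timelapse.mp4
fi
```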

Raspberry Pi Lens Heater

Happy New Year! Again it has taken some time since the last blog entry, due to Christmas activities taking priority. However, quite a few things have been happening with the Raspberry Pi weather camera in December. Just over Christmas we had a cold snap, and I noticed that the camera images appeared degraded in the cold; they used to be much sharper before, didn’t they? I decided to check. Here is a comparison for the same time of day, in comparable weather, but at different temperatures, on 26. Dec (-15C) and 01. Jan (+3.8C).


You don’t have to be a rocket scientist to see that the image quality is degraded at lower ambient temperature. When comparing with older images from September 2014, when it was much warmer, it is also clear that the relative sharpness of the 01. Jan image is degraded compared to the warmer September days.

I looked around to see if this was a known problem, and I did find a report of a very similar problem by someone in Germany (?), where problems with focus were correlated with cool temperatures.  More looking around landed me at a page with a lot of technical specifications for the Raspberry Pi. There is an interesting quote there that says “The threaded focus adjustment is set to infinity at the factory. Changing the focus from infinity to something closer requires that you turn the threaded lens cell counter-clockwise, moving the lens further away from the imaging sensor”  .

In other words, the Pi camera lens has fixed focus unless you start messing with it. It is a so-called Extended Depth Of Field (EDOF) lens, which essentially uses some clever optical and internal processing tricks to produce a pretty decent, sharp image without the user having to perform any focusing at all. Note that it is not an auto-focus system; the lens is fixed at all times. So it is a decent and user-friendly compromise if you operate the camera within the design specifications.

Based on this, I suspected that there was an assumption of optimal temperature built into the PI camera EDOF system, for example an assumption of room temperature (say, +20C). My take would then be that low winter temperatures cause the lens assembly to contract/deform in such a way that the lens is effectively brought slightly closer to the imaging sensor, resulting in a “beyond infinity” focus setting, and thus rather blurred images in the cold.

This is bad news if you are using the PI camera for outdoor imaging at low temperatures, but is it possible to do something about it? One might of course consider changing the lens focus position by rotating the lens, but that is not practical in this case, considering the tiny lens and the prospect of doing it in the cold.

However, heating the camera board to near room temperature is perhaps more feasible? From amateur astronomy we know that dew heaters are made by coupling a number of resistors in parallel and sending a small amount of current through them, generating something like 2W in total to heat the optical surfaces and thus avoid dew.


We could perhaps make a similar system using just a few resistors, generating something like 0.5W to heat just the camera board? The general idea is illustrated at left, using 3×150 Ohm resistors in parallel. If you apply 5V to this setup, it will generate 0.5 Watt. By placing the resistors close to the lens assembly, much of the generated heat will be transferred to the lens. By also measuring the temperature close to the lens, one can determine how long/how much the lens should be heated, and optionally turn the heater on/off automatically as required.

To make this work, one needs to insulate the resistors so they do not lose the generated heat too fast. I decided to embed the resistors in melted plastic, as I had some hobby plastic that could be used. I made a simple form, put the resistor assembly in it, poured plastic over it and melted the plastic with a heat gun. After cooling and adjustments I had a basic lens heater element, shown below. The heat from the resistors will not dissipate as easily, and the plastic will probably also cause a slightly more uniform heating around the lens.

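The 0.5 Watt figure is easy to verify: three 150 Ohm resistors in parallel give 50 Ohm, and P = V²/R = 5²/50 = 0.5 W. A quick sketch of the arithmetic:

```shell
# Three 150-ohm resistors in parallel, driven at 5 V (values from the text).
R_EACH=150
N=3
VOLTS=5

# Parallel resistance of N equal resistors is R/N.
R_PARALLEL=$(awk -v r="$R_EACH" -v n="$N" 'BEGIN { print r / n }')
# Dissipated power: P = V^2 / R.
POWER=$(awk -v v="$VOLTS" -v r="$R_PARALLEL" 'BEGIN { printf "%.1f", v * v / r }')

echo "${R_PARALLEL} ohm, ${POWER} W"   # prints: 50 ohm, 0.5 W
```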

I could have put a small DS18B20 temperature sensor into the melted plastic, but I was unsure whether it would survive. So instead I cut a trace for it after the plastic had cooled and glued it in place between the red markings in the image below left. Before that, I had soldered suitable wires to it. At the same time, holes were drilled to match the existing holes in the PI camera board; here one needs to be accurate with the separation and placement of the holes. In the end I used M2 machine screws and a piece of plastic on the back side of the camera board, with similar holes, to hold the assembly in place. The purpose of the plastic on the back is to insulate both electrically and temperature-wise.


The DS18B20 temperature sensors are quite “friendly”: you can connect many such sensors to the same wire/GPIO pin, as they are so-called “1-wire” sensors (although you also need wires for power). Once connected to the GPIO pins, the measurements turn up as small text files you can read. The method I use for reading the temperature sensors is described here and especially here. Since the weather camera images are scheduled using crontab, the driver for the 1-wire sensors must be loaded at system start-up; there is a page describing how to do that here.

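A minimal sketch of how such a reading can be parsed in shell; the sample values and the demo file below are made up for illustration (on the PI the real file lives under a per-sensor path such as /sys/bus/w1/devices/28-xxxxxxxxxxxx/w1_slave):

```shell
# Read one DS18B20 via its sysfs text file. The file's second line
# ends in "t=<millidegrees Celsius>".
read_temp() {
  awk -F 't=' '/t=/ { printf "%.3f", $2 / 1000 }' "$1"
}

# Demo with a fabricated w1_slave file, mimicking the real format.
cat > /tmp/w1_slave_demo <<'EOF'
72 01 4b 46 7f ff 0e 10 57 : crc=57 YES
72 01 4b 46 7f ff 0e 10 57 t=23125
EOF
read_temp /tmp/w1_slave_demo   # prints: 23.125
```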

In a previous post, I discussed how to control a relay, and now this could come into use. Quite possibly, one does not want the heater to be on at all times. We have very variable temperatures during the winter, and when the sun is high in the summer it can get rather warm. So a system for controlling the applied heat is required. If a separate power supply is used and its socket is accessible, one may do it manually. But in this case the socket is not conveniently located, and I eventually want 100% automatic temperature control.

A simple solution to this is to use the relay board, allowing the power to the lens heater to be controlled from the PI itself, even when the heater is powered from a separate power supply. It then also becomes possible to automate the heater control by reading the temperature sensor embedded in the heater. Typically, one may want to turn on the heater if the temperature drops below +10C and turn it off when it exceeds +20C, or something of that nature. Such a thing can be done from software, using the relay.

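That on-below-+10C / off-above-+20C rule is a simple hysteresis band. A sketch of the decision logic in shell, working in millidegrees as the DS18B20 reports them (the decide helper and the exact thresholds are illustrative, not the code actually running on the PI):

```shell
ON_BELOW=10000    # millidegrees C: turn heater on below +10C
OFF_ABOVE=20000   # millidegrees C: turn heater off above +20C

# decide <temp_millideg> <current_state: on|off>  ->  prints new state
decide() {
  if [ "$1" -lt "$ON_BELOW" ]; then
    echo on
  elif [ "$1" -gt "$OFF_ABOVE" ]; then
    echo off
  else
    echo "$2"    # inside the band: keep the current state (hysteresis)
  fi
}

decide 5000 off    # prints: on
decide 15000 on    # prints: on   (inside the band, unchanged)
decide 22000 on    # prints: off
```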
Adapting a relay board inside the camera housing was not part of the original weather camera design. The PI was simply placed on an 85mm×123mm aluminium plate that slides into the tracks on the inside of the camera housing. To fit the relay, I found a piece of unused plastic that could serve as a relay board holder. Adapting the relay board required a couple of holes to be drilled, and checking that the final assembly still fit inside the housing.


There are probably far more elegant ways of connecting it all than what is shown below; it is a bit of a “bird’s nest” :-) But it shows where the temperature sensors are, and also that the wires are connected to a 28 pin pin-header on the far side, so nothing is soldered directly to the GPIO pins of the PI (Model B in this instance). If required, the whole thing can easily be disassembled.


In the image above, all wiring is done, except for the power to the heater and the power to the PI itself. Notice that the PI camera board + lens heater assembly is placed on the front side of the plate holding it. This is different from before, and places the camera closer to the front glass.

The PI is powered via a micro USB cable (black in the image below), and the lens heater is connected to the red/black power cable via the relay. The whole thing then slides inside the housing, using the tracks on the inside walls.


A likely improvement and simplification, if I were to make it again, would be to integrate the plate the PI sits on with the actual holder of the camera/heater as one piece. Another likely improvement would be to create a “breakout board” for both the relay and the temperature sensors; that would eliminate the need for much of the messy wiring. So the current solution should be considered a prototype.

Below is the new front of the camera, now a combination of the old dew fix and the new lens heater. The lens heater in this configuration should also help to prevent dew, since the back side of the glass is now heated.


Initial test results

Once assembled, I was eager to test it. The first step was to check that the camera still worked, and that it was possible to read both temperature sensors. The 1-wire sensors kept their promise and both delivered data. Each sensor has a unique serial number, but you cannot tell which is which just by looking at the serial number; you have to observe the behaviour. After some simple experimentation, I was able to identify the “body” sensor and the “cmos” sensor and their respective serial numbers.

Then the real test began, by applying current to the lens heater. To switch the relay on/off, I used the C code found in the article Raspberry Pi – Driving a Relay using GPIO, i.e. the same method as in Raspberry Pi – Controlling a Relay.
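That C approach can also be sketched with the sysfs GPIO interface that was current at the time; set_relay, the pin number and the overridable GPIO_ROOT (handy for dry-running the logic off the PI) are illustrative assumptions, not the code from those articles:

```shell
GPIO_ROOT="${GPIO_ROOT:-/sys/class/gpio}"

# set_relay <bcm_pin> <0|1> -- drive the relay input pin low/high.
set_relay() {
  pin_dir="$GPIO_ROOT/gpio$1"
  # Export the pin only if it has not been exported already.
  [ -d "$pin_dir" ] || echo "$1" > "$GPIO_ROOT/export"
  echo out  > "$pin_dir/direction"
  echo "$2" > "$pin_dir/value"
}

# Example (pin 17 is an assumption; check your own wiring):
# set_relay 17 1   # heater on
# set_relay 17 0   # heater off
```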

After switching on the heater current, the “cmos” temperature sensor started to report higher values. Success! During the initial testing, the outside temperature was about +2C, and before heating began the cmos temperature sensor reported just over 5C. After switching on the 5V heater current, the temperature increased gradually over 45-60 minutes until it stabilised around +17C. This was pretty good!  If more power is needed, it is possible to run the heater at 6V or higher, one just needs to check that the power rating of each resistor is not exceeded, we don’t want anything to catch fire!

How about image quality? This is the best part: the sharpness is dramatically improved. At the time of writing it is dark, but I will write a new post tomorrow, comparing daylight images to previous images taken under similar ambient temperature conditions.

To conclude, the heater works and it has the desired image quality effect!