260 000 images on a Raspberry Pi

In the previous post the op_lite object oriented database library for C++ was introduced.  I have been testing this library on Windows and Kubuntu using images from the Raspberry Pi1 Model B weather camera. The camera captures a JPEG image of size 1296×972 every minute, which means that each day there are 1440 additional images to put in the weather camera database. The database now has a viewer written in C++ based on op_lite and wxWidgets, It works fine on both Windows and Linux.

The Pi weather camera has been running steadily for just about 6 months now, and it has so far accumulated just over 260 000 images (database size > 20GB) showing the daily weather plus stars at night. Today I wanted to try op_lite and the viewer on another Pi1 Model B, so I compiled the database viewer application there. This is straightforward, as Raspbian is a Debian derivative, just like Kubuntu.

The 20GB database was copied from Windows across the LAN to the Pi, which has a 320GB USB hard drive connected, formatted as Linux ext4. The copy took a few minutes and the compilation of the software took longer, but it worked! It shows that the database is compatible and can be freely copied between Windows, Kubuntu and the Raspberry Pi.

260 000 images captured on one Pi1, viewed on another. It doesn’t run as fast as on a desktop, but it is certainly usable. I have a new Pi2 Model B coming soon, and it will be interesting to see how things perform there. As the Pi2 is said to be about six times faster than the Pi1, it should be good!


op_lite – OODB library for C++

This post announces the existence of a class library for C++ called op_lite. It is the result of something I have been wanting to do for quite some time, so this is a kind of major milestone. The library is a portable, lightweight object persistence library for C++ on Windows (MSVC2010) and Linux (g++). It makes it easy to write C++ applications with in-process persistent objects, i.e. objects that live within a database file.

Download version V1.0-00:  source code and white paper.   If you download and try the library, I would appreciate your comments below this post.



In the world of C++, a large number of excellent open source libraries exist for almost any conceivable purpose. In addition, most of these libraries are cross platform, i.e. they may be used under several different operating systems. Many database libraries also exist, but in this author’s opinion it is a problem that the word ‘database’ is, for many people, synonymous with a traditional relational database based on some form of SQL. Although these databases are very powerful, the programming model they impose does not support object oriented programming.

What is missing is an open source and portable database library supporting object orientation, allowing the developer to use native C++ classes with persistent instances living naturally in the database. Such systems do exist, but there are unfortunately few open source libraries in this ‘pure’ category. op_lite’s objective is therefore to provide a single process object oriented database library for C++ applications, with support for persistent containers and polymorphic pointers. 

The name op_lite stands for Object Persistence – Lightweight. It is a C++ library that offers automatic in-process persistence of C++ objects: the application code never explicitly reads or writes to the database; it all happens behind the scenes, given that certain programming patterns are followed. This is similar to most other “real” OO databases. The effect is that the objects are perceived to “live in the database”.

Design and implementation

op_lite is implemented as a small C++ class library. The library provides helper classes for managing databases, base classes for deriving user defined persistent classes, and classes for declaring persistent member data. From the figure below it is clear that op_lite relies on SQLite for the low level implementation. Several libraries exist that encapsulate relational databases, but op_lite takes a different approach than most of these: the use of SQLite is mostly considered an implementation detail, albeit a very useful one. Reading the source code of an application using op_lite will not show many signs of SQLite being used; it is mostly hidden from view.


All in all, the reasons for using SQLite as the back end are:
- it is a zero-configuration, in-process engine
- it is proven technology, extensively tested
- it is very efficient
- it is open source
- it is very well documented
- it is portable (both source code and databases)
- it supports virtually unlimited size databases
- it allows using standard SQLite tools for special purpose operations
- it relieves the author of op_lite of developing a competing back end :-)

Using the library

For an in-depth description of the library, please see the white paper mentioned earlier in this post. Here, we just give a small taste of what it looks like. Assuming the application needs a 2-dimensional Point class, containing x- and y- floating point coordinates, declaring it as a persistent class using op_lite may look something like this:


An op_lite persistent class needs to inherit from op_object, or from another class derived from op_object. It also needs to have a default constructor, plus override the pure virtual function op_layout declared in op_object. The persistent data members are declared using op_double, one of the supported persistent types:


Looking at the Point.cpp implementation, we find more characteristics of persistent classes. Notice that persistent members must be initialised using op_construct, which takes the member variable as a parameter, or op_construct_v1, which also takes an initialisation value of the corresponding transient type.
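To make the pattern concrete, here is a hypothetical sketch of what Point.h and Point.cpp might look like, pieced together from the conventions described above. The exact op_lite signatures (in particular the op_layout parameter list and the op_construct calls) are assumptions; see the white paper for the real declarations.

```cpp
// Point.h (hypothetical sketch, not taken from the op_lite sources)
class Point : public op_object {
public:
   Point();                   // default constructor is required
   Point(double x, double y);
   virtual void op_layout();  // pure virtual in op_object; exact signature assumed
private:
   op_double m_x;             // persistent floating point members
   op_double m_y;
};

// Point.cpp
Point::Point()
: m_x(op_construct(m_x))           // default initialisation
, m_y(op_construct(m_y))
{}

Point::Point(double x, double y)
: m_x(op_construct_v1(m_x, x))     // initialise from a transient double
, m_y(op_construct_v1(m_y, y))
{}

void Point::op_layout()
{
   op_bind(m_x);   // bound members appear in the database
   op_bind(m_y);
}
```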


Furthermore, we notice the use of op_bind  in the op_layout overload. Here, each member variable is “bound” so that it will appear in the database.  This is all that is required to read and write data!  Once we have a persistent class like Point, we can create persistent objects in a database this way:


In this tiny example, several important aspects are illustrated. One is the use of op_mgr() to create or access databases. In this case we create a new database file with internal logical name “poly_shapes” , stored in the given file path “db_path”.

The next thing that happens is that an op_transaction is declared as a stack object. This starts a database transaction, and it is also a clean way of making sure that the Point instances do not cause memory leaks at the end of the if scope. The transaction causes the objects created within the scope to be committed to the database; the transient cache objects are automatically removed, while the persistent objects remain in the database. Finally, the database file is closed.
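Put together, the creation code described above might look roughly like the sketch below. This is a hedged reconstruction from the prose, not the actual poly_shapes example, and the op_mgr() member function names are assumptions.

```cpp
// Hypothetical sketch of creating persistent objects (call names assumed)
if(op_mgr()->create_database("poly_shapes", db_path)) {
   op_transaction trans;              // stack object: begins a transaction
   Point* p1 = new Point(1.0, 2.0);   // persistent objects, created as usual
   Point* p2 = new Point(3.0, 4.0);
}                                     // scope ends: objects committed,
                                      // transient cache cleaned up
op_mgr()->close_database();           // assumed call
```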

For more details, including how to restore persistent objects from the database, see the poly_shapes example code in the test example folder that is found in the source code download.

Other features

Sometimes, a persistent object contains a pointer to another persistent object, possibly of a different type.  This is done using the op_ptr<T> template. For example, a persistent pointer to a Point is declared as op_ptr<Point> .  When  op_ptr<Point> is stored in the database, it is represented as text:  “0 Point 123”. The first value (zero) indicates the format, the second value (class name) indicates the concrete type of the object, and the third value is the object’s persistent identifier.

All you have to do to get at this object is to dereference the op_ptr<Point> variable using the -> operator, just like a normal pointer. When you do that, op_lite will create a cached Point instance on your behalf and initialise the member variables with whatever is stored in the database. However, this means that op_lite must know which C++ class to instantiate when it sees a text like “Point”. This is achieved via the “type factory”, where an application declares the persistent classes it is using. Such code must be executed at each startup of an op_lite application.
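Registration at startup might look something like the sketch below. The function name register_type is a placeholder for the actual type factory API; consult the white paper for the real call.

```cpp
// Hypothetical startup registration (API names are placeholders)
void declare_persistent_types()
{
   op_mgr()->register_type<Point>("Point");    // maps "Point" to the C++ type
   op_mgr()->register_type<Circle>("Circle");  // one call per persistent class
}
```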


Another interesting capability is the ability to use C++ container classes as member variables, as in the example below. In this case, ShapeCollection is a persistent class that has a persistent vector of persistent polymorphic pointers as a member variable. Persistent containers are implemented using the MessagePack library, which is provided as part of the source code.


The following “cheat sheet” provides an overview of the various classes provided in op_lite. For a more complete description, see the white paper and the provided example source code.


Building op_lite from source code

There are two ways op_lite can be built: either using the Code::Blocks project file, or via the provided makefiles generated from the Code::Blocks project file. Using the makefiles is the easiest option, as they have been prepared to have few dependencies. Before building op_lite you must download and build boost and MessagePack.

Building with ‘Makefile.msvc’ on Windows

The file ‘Makefile.msvc’ builds op_lite on Windows using MS Visual Studio 2010 (Express or full edition). To use the makefile, open the “Visual Studio Command Prompt (2010)” from the Windows start menu and navigate to the op_lite source directory. Edit the file Makefile.msvc and adjust the two lines at the top so they point to where boost and MessagePack have been installed and built (replace the paths below):

MSGPACK_INCLUDE = E:\\cpde3\\zdep\\3rdparty\\msgpack\\msgpack-c\\include
BOOST_INCLUDE = E:\\cpde3\\zdep\\3rdparty\\boost\\boost_1_55_0

Then run the makefile

$ nmake -f Makefile.msvc

The generated op_lite.lib and op_lite.dll files are found in the .cmp\msvc\bin\Release subfolder.

Building with ‘Makefile’ on Linux

The file ‘Makefile’ builds op_lite under Linux using g++. To use the makefile, open a terminal window in the op_lite source directory. Edit the file Makefile and adjust the two lines at the top so they point to where boost and MessagePack have been installed and built.

MSGPACK_INCLUDE = /usr/local
BOOST_INCLUDE = /home/ca/home_work/cpde_root/zdep/3rdparty/boost/boost_1_55_0/

Then run the makefile

$ make

The generated libop_lite.so shared object library is found in the .cmp/gcc/bin/Release subfolder.

Feedback wanted

I would like to have your feedback on this library, please comment below. If the interest is sufficient, the library could move to github or similar places. Right now I publish it here as a beta for review. I hope you enjoy it :-)

Open letter to Røyken Kommune (1)

To those who do not speak Norwegian: the following is an open letter to the local authorities of Røyken Kommune. Via a “consultant” they “request” that I install a water meter in my house, and in the same sentence they declare that if I object they will force me to do it. A time limit for compliance is given. In the same letter, they also declare their intention to apply extensive remote surveillance. There is no water shortage in Norway, quite the opposite, and no credible reason is given other than that they “want to reduce water consumption to the 2004 level”. I have paid all my taxes and bills since I moved here in 1993, I intend to continue to do so, and I have no excessive water usage. I suspect the real reason behind this is to establish a way to increase the pricing of basic resources like H2O, just as they have already started taxing the exhaling of harmless CO2; I am now waiting for taxation of O2 intake to be established. The second suspected reason is what is clearly stated in their letter: extensive surveillance and control over citizens. All of this is supposedly decided locally. However, it is centrally organised, disguised as “local democracy” and implemented via unelected consultants working for unelected entities I have never heard of.

Recently I received a letter from a “consultant” named Brynhild Oddrunsdottir, who claims to act on behalf of Røyken Kommune through something called “Viva iks”. I have no idea who “Viva iks” is, and no documentation is given of their authority to act on behalf of politically elected authorities. The heading of the letter reads as follows:


The letter claims that the order is part of “securing the supply of drinking water” and that the total water withdrawal, for some unspecified reason, shall be reduced to the “2004 level”. No rational reason is given for such a goal. We do not live in the Sahara; on the contrary, surface flooding regularly occurs in the area where I live. Furthermore, whoever wrote this apparently believes that law-abiding residents, who as a matter of course have paid municipal fees all these years, shall now be subjected to arbitrary orders, surveillance and extra expenses.


It is all presented as if it were part of local democracy. In reality this is directed by central authorities and bureaucrats, and executed through obscure consultancy firms nobody has heard of. What is further worrying is that the probable real justification is left unstated: establishing the basis for future revenue increases through higher taxation and restriction of each individual’s access to fundamental resources such as water (H2O), combined with increased surveillance.


Central authorities long ago introduced taxation of exhalation (CO2), with no rational justification beyond what has been “established” through the IPCC, until recently led by the railway engineer Rajendra Pachauri, now the subject of a police investigation in India. This is now followed up with a stated ambition of limiting access to H2O, via surveillance that establishes the basis for control and taxation of something we already pay for. When will the letter arrive telling us that our intake of O2 must be limited to the 2004 level?

I request that the consultant provide credible documentation showing unambiguously that she is acting lawfully on behalf of democratically elected authorities. The letter I received contains no such documentation. I also request more complete information about why “the goal is to reduce the total water withdrawal to the 2004 level”, as well as which other measures the municipality has taken to reach this goal. Is it each individual’s water withdrawal that is to be limited to the 2004 level, or is it the municipality’s total withdrawal? How does this relate to the change in the municipality’s population?

Upon receipt of such documentation I will comply with the request, though without much pleasure. Trust in local authorities has not been strengthened by this.

A second experiment: Boolean operations

In the previous post, it was shown how it is possible to convert a real object into a 3d computer model, suitable for replication using a 3d printer. This was done using just a simple flatbed scanner and some software. The object chosen there (the wrench/spanner) was essentially 2-dimensional if you ignore the thickness, so some may say this was cheating a bit. Could we achieve a similar effect with a more truly 3-dimensional object? The following object is our second replication challenge:


This object is not entirely flat, so creating a virtual replica of it is a more challenging task. If we put it on the flatbed scanner and scan it from two projections, from below and from the side, we get the result below (scanner lid open). The only thing done here is to present the two projections in the same image and crop away irrelevant areas to the left and right.


We now give these images the same treatment as in the first experiment. That means stretching the histogram, blurring the surfaces and using curve tools in a bitmap editor. The goal is to emphasize the edges in the two projections and remove everything else in the images. Below, the resulting projections are shown together for illustration purposes, but observe that each projection is treated separately.


We then give both of these  images the same treatment as before, using potrace, inkscape and pstoedit. Again, the results are simplified profiles in DXF file format, using only LINE segments:





This time, we employ more of OpenSCAD’s powerful tools, namely Boolean operations. For the uninitiated, they can be compared to mathematical set operations such as intersection, union and difference. But instead of operating on mathematical sets, OpenSCAD operates on 3-dimensional solid objects. Watch what happens if we define three solid objects (box, thing_A and thing_B) and subtract them from each other in the right order:


Not bad, huh? A small miracle… Again, how did this happen? Look at the solids we used. Below, “thing_A” is shown in yellow and “thing_B” in transparent grey. These were the bodies extruded from the image projections.


We may compare “box” (red) and “thing_A”  (transparent grey) in a similar manner:


What happens is two subsequent Boolean operations:

1. The red box is the original positive body, and “thing_A” gets subtracted from it.  That makes the “thing” without the holes.

2. Then, “thing_B” is subtracted from the result of 1. It is as if the holes get punched out using a punching tool. In many ways, that is exactly what happens.
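In OpenSCAD source, the two subtractions above can be sketched roughly as follows. The module names box, thing_A and thing_B stand in for the three solids; this is an illustration of the nesting, not the actual project file.

```
difference() {
    difference() {
        box();       // step 1: start from the positive box...
        thing_A();   // ...and subtract the extruded side projection
    }
    thing_B();       // step 2: punch out the holes with the second body
}
```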

The final result is the green “thing” as shown in OpenSCAD above. We can also save this as an STL file,  a collection of 3d triangles, and present them in wireframe mode:

Such triangles are what 3d printers need. Or, to be more precise, they are the starting point of 3d printing. When printing, the triangles are cut with horizontal planes from bottom to top (also a kind of Boolean operation); the resulting intersections are horizontal line segments that can be used to generate G-code to steer the printer motors.

But that subject is for some other time.

A reverse 3d-printing experiment

I am in the process of buying a 3d-printer, so it is a good idea to look at ways of creating 3d models to print. I still have some time until I get the printer, so it is a good time to learn about the software you may need to master. Of course, the printer has its own software, but you also have to use other programs that are independent of the actual printer. In this post we shall look at some possibilities using mostly free, open source programs. The main exception is the use of Photoshop, but I presume Gimp or even Inkscape could do the same job as Photoshop here; I’m just using what I know a bit better.

The starting point when using a 3d printer is a virtual 3d computer model of the object you are printing. But sometimes you have a real object and want to create a 3d replica of it. This will be the subject of our “reverse 3d-printing experiment”: create a 3D computer replica of a real object. Below is our test specimen, a nostalgic object: a metal wrench (or ‘spanner’ if you are in the UK) I got as a kid. It came with my very first bicycle. Can we make a computer replica?


If we place the wrench on our cheap flatbed scanner, maybe there is a way to obtain an accurate profile of it? Let us try:


Below left is the raw output from the flatbed scanner, a BMP file. On the right is the same image after slight manipulation using an old Photoshop CS2. The background was selected with the “colour range” feature, with some feathering added to create a smooth edge. Then one left-over background area was clipped. After that, the selection was inverted and the “levels” feature was used to blacken the wrench. Finally, some “Gaussian blur” was applied to the whole image to soften the edges even more and remove any remains of edge highlights from the scanning. The result is basically a black and white image of the wrench. But it is still just a raster image.


What we need is a vectorized representation of the wrench edges. The following steps are a little convoluted; they could be simplified considerably by improving the DXF file support in one of the open source programs. But until we have that, we can do the following:

Vectorizing the bitmap image

The first thing we do is run potrace, a program that boasts exactly the feature we want: “Transforming bitmaps into vector graphics”. We need a file in DXF format describing the wrench profile, and potrace can generate DXF files. However, it creates a DXF file with some features not understood by other programs, so we have to take a detour via SVG format and Encapsulated Postscript (EPS) format before we return to a simpler DXF representation that can be used, i.e. a version with only simple LINE entities. Below is how I did it, using both Windows and Linux along the way. First we run potrace to get the SVG file from the fixed-up BMP file:


Let us copy that SVG file over to a Linux Kubuntu machine and run a couple of programs there. First we install inkscape and another program called pstoedit, based on some tips found here.

$ sudo apt-get install inkscape
$ sudo apt-get install pstoedit

Now that we have the required software to complete our vectorization detour, let us use it. First we create an intermediate EPS file using inkscape:

$ inkscape -E intermediate.eps Wrench_fix.svg

Second, we create the final, simplified DXF file using pstoedit, with the option “-polyaslines” to produce a DXF file containing only individual straight lines, with no polylines or spline curves. The final vectorized file is ‘wrench_os.dxf’ here:

$ pstoedit -dt -f dxf:-polyaslines\ -mm intermediate.eps wrench_os.dxf

We can now open the created DXF file in, for example, LibreOffice and view what we have created. It is no longer a bitmap image, but a trace of the wrench edges, i.e. a series of vectors.


Creating a 3D model

This is where the fun begins in earnest. There is a really good and totally free program called OpenSCAD, which has some extremely powerful features for modelling 3D objects. This includes so-called “Boolean operations” in CSG modelling, but also features for extruding 3D objects from 2D profiles like the one we have just created. So let us try the following single command in OpenSCAD and watch what happens:

linear_extrude(height = 10) import("Wrench_os.dxf");

From that single line, we got something we recognise!

What happened here? We had created the wrench profile in the DXF file. To understand what happened, you can read the above OpenSCAD commands right to left.

First, we imported the DXF file containing a profile in the XY-plane. Second, we extruded (a ‘sweep’ if you prefer) the complete profile 10 units in the Z-direction. The result was a totally recognizable virtual 3d wrench, looking just like the original, nostalgic bicycle wrench. 

With this model, we have everything required for creating a 3d printed replica. The next logical step in such a printing process would be to create an STL file, which simply contains a number of 3-dimensional triangles describing the outer surface of the wrench model.


To prove that it works, we can view the generated STL file in a free STL viewer (chosen at random):


The STL file is available (zipped) here.


There are some incredibly powerful and free software tools available that can be used, in combination with a bit of creativity, to arrive at some rather impressive results. This is just great. OpenSCAD is a key tool, so this author will spend some time learning it better. These tutorials are a great introduction to OpenSCAD (recommended):

How to use Openscad (1), tricks and tips to design a parametric 3D object
How to use Openscad (2): variables and modules for parametric designs
How to use Openscad (3): iterations, extrusions and more modularity!
How to use Openscad (4): children and advanced topics

There will be more on 3d printers from this blog.

Sunny Sunday time-lapse

Today was a nice and sunny winter Sunday, with outside temperatures around -6°C. From the weather camera capturing an image every minute, a time-lapse video covering approximately 12 hours was assembled. It shows the changing weather conditions plus a few cross country skiers enjoying themselves in the sunshine. It takes about 3.5 minutes to watch. I think you will also agree that the Raspberry Pi weather camera does a pretty good job, with the heater system keeping it in focus!

Time-lapse 25. Jan 2015


The video was created automatically by first generating a label in the bottom left corner of each image. The information about exposure and camera temperature sensors is taken from XML files generated by the on-board RPI software.

Second, 3 interpolated images between each pair of originals were created, effectively giving the impression of images taken every 15 seconds.  The XML generation, image interpolation and labelling software is home grown.

The final step was time-lapse video generation using ffmpeg.  A bash script running under Kubuntu orchestrates the whole thing,  but a very similar process can easily be achieved under e.g. Windows.

Mark2 heater board

In the Raspberry Pi Lens Heater post, the motivation for heating the RPI camera lens when operating in cold temperatures was given. A prototype version of a heater system was also presented, and it has worked very well since early January. The images are significantly sharper when the lens is heated to around 15-20°C, and even well before that the effect is significant.

Temperatures in January have been rather mild, around 0°C  (+/-5°C), but recently we had a day below -15°C  and the camera board temp sensor showed only +4°C at the lowest. Still, the star images and other details verified the heating had a good effect.

A friend showed interest in setting up a similar camera with a heater, so that provided a chance to improve on the design. Clearly, the original “bird’s nest” implementation left something to be desired. My friend also wanted to accommodate 3 temperature sensors: one for the camera board (“CMOS”), one for the camera housing (“BODY”) and one for the outside (“OUT”). That meant an even bigger nest, or something slightly smarter, would be required.

After buying some parts from China via eBay, I came up with the rather obvious idea of eliminating the “nest” by assembling the components on a single board that attaches directly to the GPIO pins of the Raspberry Pi. An informal sketch of the design follows:


The overall idea is based on a 4×6 cm prototype PCB and a 2×13 pin header, soldered together in such a way that the board becomes a suitable add-on for the Raspberry Pi. The mini relay for switching the heater on/off is placed at the opposite end of the board relative to the pin header, along with its control circuitry. Between the two there is room for a “bus” for multiple DS18B20 temperature sensors. One is placed on the board; screw connectors are provided for the other sensors off the board. As the DS18B20 sensors are “1-wire” devices, they can be identified in software via their unique serial numbers.

The board design thus allows both GPIO relay control and the reading of 3 temperature sensors.


This looks like an improvement to me. As designed, the board simply connects to the 26 GPIO pins of the Raspberry Pi Model B using the 2×13 pin header. My friend is getting the Model B+ with more GPIO pins, but luckily the first 26 pins of the two RPI models are the same, so the board should fit the B+ just fine.


A couple of quick tests showed that both the relay control and the temperature sensor worked just fine, so the board is fully operational and awaiting the Mark2 camera board heater element to be completed.

After completing the Mark2 heater board, I received some PCB Pin connector kits. They are possibly better alternatives than the screw connectors, since you do not risk getting the polarity wrong once the board has been properly made. Still, I am quite happy with the current Mark2 board!

Raspberry Pi Lens Heater

Happy New Year! Again it has taken some time since the last blog entry, due to Christmas activities taking priority. However, quite a few things have been happening with the Raspberry Pi weather camera in December. Just over Christmas we had a cold snap, and I noticed that the camera images appeared degraded in the cold; they used to be much sharper before, didn’t they? I decided to check. Here is a comparison for the same time of day, with comparable weather but different temperatures, on 26 Dec (-15°C) and 1 Jan (+3.8°C).


You don’t have to be a rocket scientist to see that the image quality is degraded at lower ambient temperature. Comparing with older images from September 2014, when it was much warmer, it is also clear that the sharpness of the 1 Jan image is degraded compared to the warmer September days.

I looked around to see if this was a known problem, and I found a report of a very similar problem by someone in Germany (?), where problems with focus were correlated with cool temperatures. More looking around landed me on a page with a lot of technical specifications for the Raspberry Pi. There is an interesting quote there that says “The threaded focus adjustment is set to infinity at the factory. Changing the focus from infinity to something closer requires that you turn the threaded lens cell counter-clockwise, moving the lens further away from the imaging sensor”.

In other words, the Pi camera lens has fixed focus unless you start messing with it. It is a so-called Extended Depth Of Field (EDOF) lens, which essentially uses some clever optical and internal processing tricks to reach a pretty decent, sharp image without the user having to perform any focusing at all. Note that it is not an auto-focus system; the lens is fixed at all times. So it is a decent and user friendly compromise if you operate the camera within the design specifications.

Based on this, I suspected that an assumption of optimal temperature is built into the Pi camera EDOF system, for example room temperature (say, +20°C). My take would then be that low winter temperatures cause the lens assembly to contract/deform in such a way that the lens is effectively brought slightly closer to the imaging sensor, resulting in a “beyond infinity” focus setting and thus rather blurred images in the cold.

This is bad news if you are using the Pi camera for outdoor imaging at low temperatures, but is it possible to do something about it? One might of course consider changing the focus position by rotating the lens, but this is not practical in this case, considering the tiny lens and having to do it in the cold.

However, heating the camera board to near room temperature is perhaps more feasible? From amateur astronomy we know that dew heaters are made by coupling a number of resistors in parallel and sending a small current through them, generating something like 2W in total to heat the optical surfaces and thus avoid dew.


Could we perhaps make a similar system using just a few resistors and generate something like 0.5W to heat just the camera board? The general idea is illustrated at left, using 3×150 Ohm resistors in parallel. If you apply 5V to this setup, it will generate 0.5 Watt. By placing the resistors close to the lens assembly, much of the generated heat will be transferred to the lens. By also measuring the temperature close to the lens, one can determine how long and how much the lens should be heated, and optionally turn the heater on/off automatically as required. To make this work, one needs to insulate the resistors so they don’t lose the generated heat too fast.

I decided to embed the resistors in melted plastic, as I had some hobby plastic that could be used. I made a simple form, put the resistor assembly in it, poured plastic over it and melted the plastic with a heat gun. After cooling and adjustments I had a basic lens heater element, shown below. The heat from the resistors will not dissipate as easily, and the heating around the lens will probably also be slightly more uniform.


I could have put a small DS18B20 temperature sensor into the melted plastic, but I was unsure whether it would survive. So instead I cut a trace for it after the plastic had cooled and glued it in place between the red markings in the image below left, after first soldering suitable wires to it. At the same time, holes were drilled to match the existing holes in the Pi camera board; here one needs to be accurate with the separation and placement of the holes. In the end I used M2 machine screws and a piece of plastic on the back side of the camera board, with similar holes, to hold the assembly in place. The purpose of the plastic on the back is to insulate both electrically and thermally.


The DS18B20 temperature sensors are quite “friendly”: you can connect many of them to the same wire/GPIO pin, as they are so-called “1-wire” sensors (although you also need wires for power). Once connected to the GPIO pins, the measurements turn up as small text files you can read. The method I use for reading the temperature sensors is described here and especially here. Since the weather camera images are scheduled using crontab, the driver for the 1-wire sensors must be loaded at system start-up; there is a page describing how to do that here.
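A minimal Python sketch of what reading those text files looks like, assuming the standard w1-gpio/w1-therm sysfs interface on Raspbian (DS18B20 devices show up under `/sys/bus/w1/devices/28-*`; the exact serial numbers will of course differ):

```python
import glob

def parse_w1_slave(text):
    """Parse the two-line w1_slave file a DS18B20 exposes via sysfs.
    Returns the temperature in degrees Celsius, or None if the CRC failed."""
    lines = text.strip().splitlines()
    if not lines or not lines[0].endswith("YES"):
        return None                      # CRC check failed, discard the reading
    _, _, raw = lines[1].partition("t=")
    return int(raw) / 1000.0             # the kernel reports millidegrees

def read_all_sensors():
    """Read every DS18B20 on the bus, keyed by the sensor's serial number."""
    readings = {}
    for path in glob.glob("/sys/bus/w1/devices/28-*/w1_slave"):
        serial = path.split("/")[-2]
        with open(path) as f:
            readings[serial] = parse_w1_slave(f.read())
    return readings
```

Since each sensor is keyed by serial number, a script can log the “body” and “cmos” temperatures separately once you have identified which serial belongs to which sensor.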


In a previous post, I discussed how to control a relay, and now this could come into use. Quite possibly, one does not want the heater to be on at all times. We have very variable temperatures during the winter, and when the sun is high in the summer it can get rather warm, so a system for controlling the applied heat is required. If a separate power supply is used and the socket is accessible, one may do it manually. But in this case the socket is not conveniently located, and I eventually want 100% automatic temperature control.

A simple solution is to use the relay board, allowing the power to the lens heater to be controlled from the PI itself, even when the heater is powered from a separate power supply. It then also becomes possible to automate the heater control by evaluating the temperature sensor embedded in the heater. Typically, one may want to turn on the heater if the temperature drops below +10C and turn it off when it exceeds +20C, or something of that nature. This is possible to do from software, using the relay.
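The on-below-10/off-above-20 logic is a classic hysteresis thermostat. A small Python sketch of the decision rule (the thresholds and function name are illustrative; the returned state would be what gets written to the relay's GPIO pin):

```python
def heater_state(temp_c, heater_on, low=10.0, high=20.0):
    """Hysteresis control: switch on below `low`, off above `high`,
    otherwise keep the current state. The dead band between the two
    thresholds prevents the relay from rapidly toggling on and off."""
    if temp_c < low:
        return True
    if temp_c > high:
        return False
    return heater_on
```

A crontab-scheduled script could read the heater's embedded sensor, call this function, and switch the relay accordingly.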

Fitting a relay board inside the camera housing was not part of the original weather camera design. The PI was simply placed on an 85mm×123mm aluminium plate that slides into the tracks on the inside of the camera housing. To fit the relay, I found a piece of unused plastic that could serve as a relay board holder. Adapting the relay board required drilling a couple of holes and checking that the final assembly still fit inside the housing.


There are probably far more elegant ways of connecting it all than what is shown below; it is a bit of a “bird's nest” :-) But it shows where the temperature sensors are, and also that the wires are connected to a 28-pin header on the far side, so nothing is soldered directly to the GPIO pins of the PI (Model B in this instance). If required, the whole thing can easily be disassembled.


In the image above, all wiring is done except for the power to the heater and power to the PI itself. Notice that the PI camera board + lens heater assembly is placed on the front side of the plate holding it. This is different from before, and places the camera closer to the front glass.

The PI is powered via a micro USB cable (black in the image below), and the lens heater is connected to the red/black power cable via the relay. The whole thing then slides inside the housing, using the tracks on the inside walls.


A likely improvement and simplification, if I were to make it again, would be to integrate the plate the PI sits on with the holder of the camera/heater as one piece. Another likely improvement would be to create a “breakout board” for both the relay and the temperature sensors; that would eliminate much of the messy wiring. So the current solution should be considered a prototype.

Below is the new front of the camera, now a combination of the old dew fix and the new lens heater. The lens heater in this configuration should also help to prevent dew, since the back side of the glass is now heated.


Initial test results

Once assembled, I was eager to test it. The first step was to check that the camera still worked, and that it was possible to read both temperature sensors. The 1-wire sensors kept their promise and both showed data. Each sensor has a unique serial number, but you cannot tell which is which by looking at the serial number; you have to observe the behaviour. After some simple experimentation, I was able to determine the “body” sensor and the “cmos” sensor and their respective serial numbers.

Then the real test began, by applying current to the lens heater. To switch the relay on/off, I used the C code found in the article Raspberry Pi – Driving a Relay using GPIO, i.e. the same method as in Raspberry Pi – Controlling a Relay.

After switching on the heater current, the “cmos” temperature sensor started to report higher values. Success! During the initial testing the outside temperature was about +2C, and before heating began the cmos sensor reported just over +5C. After switching on the 5V heater current, the temperature increased gradually over 45-60 minutes until it stabilised around +17C. This was pretty good! If more power is needed, it is possible to run the heater at 6V or higher; one just needs to check that the power rating of each resistor is not exceeded, as we don't want anything to catch fire!
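The per-resistor check at a higher voltage is simple: in this parallel arrangement each resistor sees the full supply voltage, so its dissipation is V²/150. A quick sketch (the 0.25W rating used in the comment is an assumption for typical small resistors; check the rating of the actual parts):

```python
# Per-resistor dissipation in the 3 x 150 ohm parallel heater.
# Each resistor sees the full supply voltage, so P_each = V^2 / 150.
# Assuming common 0.25 W rated resistors, 6 V (0.24 W each) is close
# to the limit; higher voltages would need higher-rated parts.

def power_per_resistor(voltage, r_each=150.0):
    """Power dissipated by one resistor of the parallel network."""
    return voltage ** 2 / r_each

print(power_per_resistor(5.0))  # ~0.167 W per resistor
print(power_per_resistor(6.0))  # 0.24 W per resistor
```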

How about image quality? This is the best part: the sharpness is dramatically improved. At the time of writing it is dark, but I will write a new post tomorrow, comparing daylight images to previous images taken under similar ambient temperature conditions.

To conclude, the heater works and it has the desired image quality effect!

Fighting dew

I have not updated the blog for a while, because I have been working abroad (Singapore). During that time, the weather camera developed dew problems after a long period of massive rain. The camera house is not 100% waterproof, and the variation in outside air humidity causes dew to settle on the inside walls. Another problem was the big front glass, exposed to the cold environment: it became the coldest “wall” as seen from the inside, and therefore the most likely place for dew to settle, precisely where you don't want it. The result is seen below: lots of dew causing fogging of the image!


The camera chip & lens cannot be seen in that image, but it is in the centre between the fake LED components (the camera house is from a fake surveillance camera). The fake LEDs sit on a dummy PCB that is also part of the problem: it makes internal ventilation problematic, increasing the dew problems.

As a first step in fixing the problem, the camera was opened and all the fake LEDs were removed. After all, their only “purpose” was to cause strange reflections of the Sun. Next, 4 big holes were drilled in the fake PCB, so that air could circulate freely between the space in front of the camera (between camera and front glass) and the rest of the camera house, where the Raspberry PI generates some heat.


Measurements have shown that the temperature inside the camera house is generally about 6 degrees C higher than the outside air. This looks like an improvement, but the front glass is still much too large and exposed to the environment. How about masking off the part of the glass that the camera isn't looking through? I did some indoor tests with a piece of cardboard to determine the size of the area that needs to be clear of obstructions, and the result is shown below. Most of the glass can be covered without obstructing the view from the Raspberry Pi camera lens.


The thinking is to cover and insulate the glass plate on the outside, so that the limited heat generated inside the camera house has a smaller surface to “work on”, i.e. just the inside of the unobstructed circular glass area exposed to the environment, right in front of the lens.


The final result is shown above: a piece of Plexiglas with the same size and shape as the cardboard plate (with a slightly bigger hole), and a 1cm thick foam plate as insulation towards the glass. The assembly is attached to the front using waterproof tape.

The resulting camera performance can be observed on the weather station page.
