How do I contact you?

Decide if it is:

  • a help question (how do I do X?): read the documentation first. If that doesn't solve it, use the "Forums".
  • an issue with the server infrastructure at Berkeley: send us an email with as much detail as possible.
  • really a software bug: read this first, then check whether the issue has already been reported. If it hasn't, create a 'New issue'.
  • general discussion that does not fit any of the above: send an email to the mailing list. This goes out to a fairly large number of people, so we tend to keep discussions high-level to limit the volume.

How can I get more debugging information?

Often it is useful to get debug output from programs that link against the GDP library, to see what's going on under the covers. This includes log-servers, applications (such as gdp-reader, gdp-writer, gcl-create, etc.), and even scripts in higher-level languages (such as Python). The GDP C library prints debugging information when certain debug flags are set.

Compiled C programs included in the GDP source

For most of the binaries shipped as part of the Debian packages or compiled from source, you can pass a command-line argument -D k=v, where k is the particular subsystem that you'd like to debug and v is the verbosity level. If you don't know what to put for k, just use * to get all subsystems. v is usually an integer; 20 is a good starting value, but you can of course use smaller or larger values based on your needs.
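As an illustration (the log name and the specific subsystem name here are hypothetical; * always works):

```
# verbose output (level 20) from all subsystems while reading a log
gdp-reader -D '*=20' my_log_name

# quieter output, restricted to a single (example) subsystem
gdp-writer -D gdp.proto=10 my_log_name
```

Quoting the * protects it from shell glob expansion.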

Writing your own C programs and linking against the C library

If you are writing your own C programs and linking against the provided libraries, the function to look for is ep_dbg_set. It takes a single const char * argument of the form k=v. The binaries mentioned above simply parse the -D command-line flag and pass the argument to ep_dbg_set. See source:apps/gdp-writer.c for example usage and source:ep/ep_dbg.h for more details.
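A minimal sketch of this (the gdp_init call and its argument are assumptions; check gdp.h for the exact signature in your version):

```c
#include <ep/ep_dbg.h>   /* ep_dbg_set() */
#include <gdp/gdp.h>     /* gdp_init() */

int main(void)
{
    /* equivalent to passing "-D *=20" on the command line:
     * verbose debugging for all subsystems */
    ep_dbg_set("*=20");

    /* initialize the GDP library (NULL = default router;
     * signature may differ between versions) */
    gdp_init(NULL);

    /* ... the rest of your application ... */
    return 0;
}
```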

Writing your own programs in other languages

This depends on the language. For example, if you are writing Python programs using the Python API, you can call gdp.dbg_set in your programs after calling gdp.gdp_init().
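A minimal sketch, assuming the gdp Python bindings are importable:

```python
import gdp

gdp.gdp_init()       # initialize the library first
gdp.dbg_set("*=20")  # then turn on verbose debugging for all subsystems
```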

I get "ERROR: 600 no route available [Berkeley:Swarm-GDP:600]"

This error is returned by the routing layer, and means that your client program was unable to find the name that you requested. There can be a number of reasons for this, including:

  • Typos in the name
  • The router you are talking to cannot find the name, because it cannot find a route to the log-server/log that you are trying to reach. This can in turn be caused by:
    • Network failures, or
    • Router bugs. Yes, we don't have a very stable router yet.
  • The log-server that you want to talk to just went away, because:
    • The log server is buggy, or
    • The router kicked it out for some reason.

If you are sure that the name is correct, and you are using the server infrastructure at Berkeley, please send us an email with some details of the log/log-server you are trying to access.

How do I remove the PEM pass phrase from my private keys after I have already created the logs? I need this to automate another application that I am building.

The PEM pass phrase is an extra layer of protection around your private keys stored on disk.

To check whether a pass phrase protects a private key (*.pem file), look at the file itself: a key encrypted in the traditional OpenSSL format carries extra Proc-Type and DEK-Info header lines, for example:

-----BEGIN EC PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,...
...
-----END EC PRIVATE KEY-----

To remove the pass phrase (assuming your key is an ECDSA key, which is the default for gcl-create), you need the openssl command-line tool (you might already have it). Assuming your encrypted input key is called IN.pem, and you'd like the output key to be stored as OUT.pem, you can do something like:

openssl ec -in IN.pem -out OUT.pem -outform PEM

Note that in order for the key to be found automatically, it must have the same name as the printable name of the log (e.g. xGkovIfeI7movSNaof5nzLFULIOJrT8Cv0q1z_UG6Y8.pem), so this is usually followed by renaming the output PEM file accordingly.
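The whole round trip can be sketched with a throwaway key (the key, pass phrase, and filenames here are stand-ins; in practice IN.pem is the encrypted key that gcl-create produced for your log):

```shell
# create a throwaway EC key and encrypt it, standing in for a gcl-create key
openssl ecparam -name prime256v1 -genkey -out plain.pem
openssl ec -in plain.pem -aes128 -passout pass:demo -out IN.pem

# strip the pass phrase (normally openssl prompts; -passin avoids that here)
openssl ec -in IN.pem -passin pass:demo -out OUT.pem -outform PEM

grep ENCRYPTED IN.pem                                  # input key is encrypted
grep ENCRYPTED OUT.pem || echo "OUT.pem has no pass phrase"
```

Afterwards, rename OUT.pem to the printable name of the log as described above.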

I get 'ERROR: 405 method not allowed' when I attempt to read a record by timestamp. What am I doing wrong?

Starting with version 0.7.0, you can read records by timestamp rather than by record number. However, there is a catch: logs created before this feature existed do not have a timestamp=>record number index. This results in ERROR: 405 method not allowed, which means that you cannot query this particular log by timestamp... unless someone creates an index for this specific log. Building such an index from scratch for old logs as an online process in the log-server would be unwise, since the logs could be arbitrarily large. Hence, a separate way of rebuilding log indices is needed (#46).

The server infrastructure at Berkeley has quite a few logs that were created before version 0.7.0 was released. At some point this needs to be fixed; watch #46 for updates.

I am an application developer and for some reason, I would like to use my own infrastructure. How do I go about it?

Here are the two ways to do this:

Quick and easy, but only for development:

If you are running Linux on x86-based systems (tested on Ubuntu 14.04), you can use a Docker setup (requires bridge-utils and docker) to run your own log-server and router. Here's how to use it: source:adm/docker-net/. Note that this covers only setting up a log-server and the router; you still have to tell your applications to use the local router/log-server.

The advantages of this approach are:

  • Very easy to set up; almost everything is already packaged for you.
  • You can write your application and do all your testing with the latest and greatest code, and later switch to a production server, confident that your applications will still run.
  • You can very easily reset your entire infrastructure by simply doing 'make clean'.
  • You can have multiple instances of log-servers/routers, and everything is contained in the Docker images (including all the data).
  • It uses the more stable (but less functional) Python-based router.

However, the downsides are:

  • You are not connected to the rest of the environment.
  • Applications running on a different machine cannot connect to your log-servers/routers (unless you are willing to go through the effort of tunneling everything).
  • Any data that you store is only available for the lifetime of your Docker container.
  • Application performance will differ when you use an actual log-server/router reached over a real, variable network.

The hard way, running the router and the log-server as standalone processes:

You can, of course, run your own log-server and router, either by compiling from source or by using pre-packaged Debian binaries. It is up to you whether you'd like to connect to the infrastructure that we run at Berkeley. However, this means you are assuming the responsibilities (and skill-set) of a system administrator. To get started, acquire the latest gdp-server and Click-based gdp-router Debian packages and install them. See the individual projects for installation instructions.
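For instance (package filenames here are placeholders; use the actual .deb files you downloaded):

```
# install the downloaded packages
sudo dpkg -i gdp-server_*.deb gdp-router-click_*.deb
# resolve any missing dependencies dpkg complained about
sudo apt-get -f install
```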


The advantages of this approach are:

  • You can be connected to the rest of the environment.
  • You can have applications spread across various devices/computers that use your local infrastructure.
  • You can have persistent storage of data.


However, the downsides are:

  • We create Debian packages only occasionally (when we do a software release).
  • You will need to tweak various configuration files and settings, which can be very specific to your particular environment.
  • If you are providing the infrastructure as a service to your local users, you need to ensure better server up-time and availability.
  • It uses the Click router, which has historically been a little less stable (but more functional) than the Python router.