
Installing sympa on FreeBSD


Let's install sympa with postfix in a FreeBSD jail.

Install sympa

Install the port:

# make -C /usr/ports/mail/sympa install clean

Make sure the syslog entries of sympa are stored in a separate log file. Create the log file first, since syslogd won't create it for you:

# touch /var/log/sympa
# mkdir -p /usr/local/etc/syslog.d
# echo "local1.*       /var/log/sympa" > /usr/local/etc/syslog.d/sympa
# service syslogd reload

Make sure sympa is allowed to write into its data and bounce directories:

# chown root:sympa /usr/local/share/sympa/list_data
# chmod 770 /usr/local/share/sympa/list_data
# chown root:sympa /usr/local/share/sympa/bounce
# chmod 770 /usr/local/share/sympa/bounce

Make sure sympa is allowed to write into its config directory:

# chown root:sympa /usr/local/etc/sympa
# chmod 770 /usr/local/etc/sympa
# chown root:sympa /usr/local/etc/sympa/sympa.conf

In my opinion, this is plain stupid. Running software should not be able to edit its config files.

Make sure sympa is allowed to write into its shared directory:

# chown root:sympa /usr/local/share/sympa
# chmod 705 /usr/local/share/sympa

Also, I think this is bad practice.

Make sure sympa is allowed to write into the system mail config directory:

# chown root:sympa /etc/mail
# chmod 770 /etc/mail

Do I like this? Absolutely not.
In particular, sympa needs to write into /etc/mail/aliases.db. If that file already exists on your system, give it a good chown as well.
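
For example, something like this (the 660 mode is my assumption; sympa only needs group write):

# chown root:sympa /etc/mail/aliases.db
# chmod 660 /etc/mail/aliases.db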

Edit /usr/local/etc/sympa/sympa.conf to your needs. Here's a basic configuration:

domain  sympa.example.com
listmaster  admin@example.com
wwsympa_url http://sympa.example.com/wws
cookie add69524c8a75f47c05ccb47f96c9195
db_type mysql
db_host 192.168.0.2
db_name sympa
db_user sympa
db_passwd Ujzz6YVdEXppajM
static_content_path /usr/local/share/sympa
static_content_url /static-sympa
sendmail_aliases /usr/local/share/sympa/list_data/sympa_aliases

Configuring sympa's database is beyond the scope of this article. Here I'm simply using MySQL.
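
For reference, here's a minimal sketch of the MySQL side, using the credentials from sympa.conf above (the client host pattern is an assumption):

mysql> CREATE DATABASE sympa CHARACTER SET utf8;
mysql> CREATE USER 'sympa'@'192.168.0.%' IDENTIFIED BY 'Ujzz6YVdEXppajM';
mysql> GRANT ALL PRIVILEGES ON sympa.* TO 'sympa'@'192.168.0.%';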

Enable the service:

# sysrc sympa_enable="YES"

Start the service:

# service sympa start

Installing and configuring postfix

Install postfix:

# make -C /usr/ports/mail/postfix install clean

Edit the main config (/usr/local/etc/postfix/main.cf) and configure a transport and a map file for sympa:

myhostname = sympa.example.com
mydomain = sympa.example.com
mydestination = localhost, sympa.example.com
recipient_delimiter = +
mailbox_size_limit = 0
transport_maps = regexp:/usr/local/etc/postfix/transport_regexp_sympa
sympa_destination_recipient_limit = 1
sympabounce_destination_recipient_limit = 1
alias_maps = hash:/etc/mail/aliases,hash:/usr/local/share/sympa/list_data/sympa_aliases
alias_database = hash:/etc/mail/aliases,hash:/usr/local/share/sympa/list_data/sympa_aliases

/usr/local/etc/postfix/transport_regexp_sympa:

/^.*\-owner@sympa\.example\.com$/ sympabounce:
/^.*\@sympa\.example\.com$/       sympa:
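
You can sanity-check the map with postmap; it should print the transport each address resolves to:

# postmap -q "test_list-owner@sympa.example.com" regexp:/usr/local/etc/postfix/transport_regexp_sympa
sympabounce:
# postmap -q "test_list@sympa.example.com" regexp:/usr/local/etc/postfix/transport_regexp_sympa
sympa: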

Edit the master process file (/usr/local/etc/postfix/master.cf) and add the transports for sympa:

sympa   unix        -   n   n   -   -   pipe
    flags=R user=sympa argv=/usr/local/libexec/sympa/queue ${recipient}
sympabounce unix    -   n   n   -   -   pipe
    flags=R user=sympa argv=/usr/local/libexec/sympa/bouncequeue ${recipient}

Enable and start postfix:

# sysrc postfix_enable="YES"
# service postfix start

Installing the web interface (wwsympa)

Install dependencies:

# make -C /usr/ports/databases/p5-DBD-mysql install clean
# make -C /usr/ports/www/p5-CGI-Fast install clean

Install nginx and fcgiwrap:

# make -C /usr/ports/www/nginx install clean
# make -C /usr/ports/www/fcgiwrap install clean

Configure a profile in /etc/rc.conf and enable the service:

fcgiwrap_enable="YES"
fcgiwrap_profiles="sympa"
fcgiwrap_sympa_socket="unix:/tmp/fcgiwrap_sympa.sock"
fcgiwrap_sympa_user="sympa"
fcgiwrap_sympa_socket_owner="sympa"
fcgiwrap_sympa_socket_group="www"
fcgiwrap_sympa_socket_mode="0770"

Start the service and check that it's listening correctly:

# service fcgiwrap start
# sockstat -l | grep fcgiwrap
sympa    fcgiwrap   79058 0  stream /tmp/fcgiwrap_sympa.sock

Add a server to the configuration files of nginx:

server {
   listen 80;
   server_name sympa.example.com;
   access_log /var/log/nginx/sympa-access.log;
   error_log /var/log/nginx/sympa-error.log;
   index wws/;
   location /static-sympa {
       alias /usr/local/share/sympa;
       access_log off;
   }

   location / {
     gzip off;
     fastcgi_pass   unix:/tmp/fcgiwrap_sympa.sock;
     fastcgi_split_path_info ^(/wws)(.+)$;
     fastcgi_param  QUERY_STRING       $query_string;
     fastcgi_param  REQUEST_METHOD     $request_method;
     fastcgi_param  CONTENT_TYPE       $content_type;
     fastcgi_param  CONTENT_LENGTH     $content_length;
     fastcgi_param  PATH_INFO          $fastcgi_path_info;
     fastcgi_param  SCRIPT_NAME        $fastcgi_script_name;
     fastcgi_param  REQUEST_URI        $request_uri;
     fastcgi_param  DOCUMENT_URI       $document_uri;
     fastcgi_param  DOCUMENT_ROOT      $document_root;
     fastcgi_param  SERVER_PROTOCOL    $server_protocol;
     fastcgi_param  GATEWAY_INTERFACE  CGI/1.1;
     fastcgi_param  SERVER_SOFTWARE    nginx;
     fastcgi_param  REMOTE_ADDR        $remote_addr;
     fastcgi_param  REMOTE_PORT        $remote_port;
     fastcgi_param  SERVER_ADDR        $server_addr;
     fastcgi_param  SERVER_PORT        $server_port;
     fastcgi_param  SERVER_NAME        $server_name;
     fastcgi_param  REMOTE_USER        $remote_user;
     fastcgi_param  SCRIPT_FILENAME    /usr/local/libexec/sympa/wwsympa-wrapper.fcgi;
     fastcgi_param  HTTP_HOST          sympa.example.com;
     fastcgi_intercept_errors on;
   }
}

Enable and start nginx:

# sysrc nginx_enable="YES"
# service nginx start
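
A quick check from the shell should already return the home page's HTML:

# fetch -qo - http://sympa.example.com/wws | head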

The web interface is now available.
[Screenshot: WWSympa running on FreeBSD]

If everything went well, you can now create a list:

[Screenshot: WWSympa showing a newly created list]

You can check that the alias file was updated correctly:

# cat /usr/local/share/sympa/list_data/sympa_aliases
## This aliases file is dedicated to Sympa Mailing List Manager
## You should edit your sendmail.mc or sendmail.cf file to declare it
#------------------------------ test_list: list alias created 22 Aug 2017
test_list: "| /usr/local/libexec/sympa/queue test_list@sympa.example.com"
test_list-request: "| /usr/local/libexec/sympa/queue test_list-request@sympa.example.com"
test_list-editor: "| /usr/local/libexec/sympa/queue test_list-editor@sympa.example.com"
#test_list-subscribe: "| /usr/local/libexec/sympa/queue test_list-subscribe@sympa.example.com"
test_list-unsubscribe: "| /usr/local/libexec/sympa/queue test_list-unsubscribe@sympa.example.com"
test_list-owner: "| /usr/local/libexec/sympa/bouncequeue test_list@sympa.example.com"

Test everything

Send a test email:

echo "Est-elle brune, blonde ou rousse ? - Je l'ignore." | mail -s "Test email" test_list@sympa.example.com

Check /var/log/maillog:

Dec  7 21:49:16 sympa1 postfix/qmgr[47576]: 987058465: from=<user1@example.com>, size=1378, nrcpt=1 (queue active)
Dec  7 21:49:17 sympa1 postfix/pipe[76920]: 987058465: to=<test_list@sympa.example.com>, relay=sympa, delay=1.1, delays=0.02/0/0/1.1, dsn=2.0.0, status=sent (delivered via sympa service)
Dec  7 21:49:17 sympa1 postfix/qmgr[47576]: 987058465: removed
Dec  7 21:49:17 sympa1 postfix/smtpd[76916]: connect from unknown[10.9.0.101]
Dec  7 21:49:18 sympa1 postfix/qmgr[47576]: 1119E8478: from=<test_list-owner@sympa.example.com>, size=2274, nrcpt=1 (queue active)
Dec  7 21:49:18 sympa1 postfix/smtp[76926]: 1119E8478: to=<user2@example.com>, relay=mail1.example.net[198.51.100.35]:25, delay=0.77, delays=0.32/0.01/0.08/0.36, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 7E4E45EA7)
Dec  7 21:49:18 sympa1 postfix/qmgr[47576]: 1119E8478: removed

Check /var/log/messages:

Dec 07 21:49:16 sympa1 sympa_msg[47458]: notice Sympa::Spindle::ProcessIncoming::_twist() Processing Sympa::Message <test_list@sympa.example.com.1512683356.76921>; envelope_sender=user1@example.com; message_id=eme4db733a-c660-45fd-bdf5-37968cdbdc1e@rasa; sender=user1@example.com
Dec 07 21:49:16 sympa1 sympa_msg[47458]: notice Sympa::Spool::store() Sympa::Message <test_list@sympa.example.com.1512683356.76921> is stored into Sympa::Spool::Archive as <1512683356.1512683356.964608.test_list@sympa.example.com,47458,8945>
Dec 07 21:49:16 sympa1 sympa_msg[47458]: notice Sympa::Spool::store() Sympa::Message <test_list@sympa.example.com.1512683356.76921> is stored into Sympa::Spool::Digest <test_list@sympa.example.com> as <1512683356.1512683356.978756,47458,7735>
Dec 07 21:49:16 sympa1 sympa_msg[47458]: notice Sympa::Bulk::store() Message Sympa::Message <test_list@sympa.example.com.1512683356.76921> is stored into bulk spool as <5.5.1512683356.1512683356.985672.test_list@sympa.example.com_z,47458,6615>
Dec 07 21:49:16 sympa1 sympa_msg[47458]: notice Sympa::Spindle::ToList::_send_msg() No VERP subscribers left to distribute message to list Sympa::List <test_list@sympa.example.com>
Dec 07 21:49:18 sympa1 bulk[47461]: notice Sympa::Mailer::store() Done sending message Sympa::Message <5.5.1512683356.1512683356.985672.test_list@sympa.example.com_z,47458,6615/z> for Sympa::List <test_list@sympa.example.com> (priority 5) in 2 seconds since scheduled expedition date

Running Jabber/XMPP client Kaiwa on FreeBSD

Let's install Kaiwa on FreeBSD.

Installing Kaiwa

Install nodejs:

# make -C /usr/ports/www/node install clean

Install npm:

# make -C /usr/ports/www/npm install clean

Create a directory to host the code and go there:

# mkdir /usr/local/www/kaiwa
# cd /usr/local/www/kaiwa

Clone the repository:

# git clone https://github.com/digicoop/kaiwa.git .

If you don't have git, you can download the zip file and extract it.
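
For example (bsdtar extracts zip archives natively; the URL follows GitHub's archive naming convention):

# fetch -o kaiwa.zip https://github.com/digicoop/kaiwa/archive/master.zip
# tar xf kaiwa.zip --strip-components 1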

Install the dependencies:

# npm install

Configuring ejabberd

My Jabber/XMPP server is ejabberd.

Here's the relevant part: we must make sure there's a path wired to the ejabberd_http_ws handler in /usr/local/etc/ejabberd/ejabberd.yml.

  ##
  ## To handle XML-RPC requests that provide admin credentials:
  ##
  ## -
  ##   port: 4560
  ##   module: ejabberd_xmlrpc
  -
    port: 5280
    module: ejabberd_http
    ## request_handlers:
    ##   "/pub/archive": mod_http_fileserver
    request_handlers:
      "/websocket": ejabberd_http_ws
    web_admin: true
    http_poll: true
    http_bind: true
    ## register: true
    captcha: true

Make sure port 5280 is open in your firewall.
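
If your firewall is pf, that would be a rule along these lines in /etc/pf.conf (adapt it to your ruleset):

pass in proto tcp from any to any port 5280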

Configure Kaiwa

Copy the example config file:

# cp dev_config.example.json dev_config.json

Here's what I put in there:

{
    "isDev": true,
    "http": {
        "host": "localhost",
        "port": 8000
    },
    "session": {
        "secret": "wSPwBucqnCY4JHEENMY6NM4UsfycNz"
    },
    "server": {
        "name": "Example",
        "domain": "example.com",
        "wss": "ws://example.com:5280/websocket/",
        "muc": "",
        "startup": "groupchat/example%40chat.example.com",
        "admin": "admin"
    }
}

Running Kaiwa

Create a user to run the service:

# pw useradd kaiwa

Allow the user to feel at home in the code directory:

# chown -R kaiwa:kaiwa /usr/local/www/kaiwa

You can check if everything is working by running the service manually:

# sudo -u kaiwa node server

If everything is fine, install forever to make sure the service is always running:

# npm install forever -g

Create the logs, give them to the right user and configure newsyslog to rotate them:

# touch /var/log/kaiwa.log
# touch /var/log/kaiwa-error.log
# touch /var/log/kaiwa-forever.log
# chown kaiwa:kaiwa /var/log/kaiwa*

echo "/var/log/kaiwa.log kaiwa:kaiwa 640 3 100 * JC /var/run/kaiwa.pid" >> /etc/newsyslog.con
echo "/var/log/kaiwa-error.log kaiwa:kaiwa 640 3 100 * JC /var/run/kaiwa.pid" >> /etc/newsyslog.con
echo "/var/log/kaiwa-forever.log kaiwa:kaiwa 640 3 100 * JC /var/run/kaiwa.pid" >> /etc/newsyslog.con

Same thing for the PID file location:

# mkdir /var/run/kaiwa
# chown kaiwa:kaiwa /var/run/kaiwa

Create an rc script in /usr/local/etc/rc.d/kaiwa and make sure it's executable.

#!/bin/sh

# In addition to kaiwa_enable, the following rc variables should be defined:

# kaiwa_msg      The name of your program, printed at start. Defaults to "kaiwa".
# kaiwa_dir      The directory where your node files live. Must be defined.
# kaiwa_logdir   The directory for logfiles. Defaults to /var/log.
# kaiwa_user     The user the service runs as. Defaults to "kaiwa".
# kaiwa_app      Application main script. Defaults to "server.js" (relative
#               to kaiwa_dir).
# kaiwa_forever  forever binary file path. Defaults to "/usr/local/bin/forever".
# kaiwa_local_forever    use the local forever binary
#               (i.e. ${kaiwa_dir}/node_modules/.bin/forever)
# kaiwa_forever_log      forever log file. Defaults to /var/log/kaiwa-forever.log.

# PROVIDE: kaiwa
# REQUIRE: LOGIN
# KEYWORD: shutdown

. /etc/rc.subr

name="kaiwa"
rcvar="${name}_enable"

start_precmd="${name}_prestart"
start_cmd="${name}_start"
stop_cmd="${name}_stop"

# forever executable
command="/usr/local/bin/forever"
pidfile="/var/run/${name}/${name}.pid"

# forever needs a path for each command
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/games:/usr/local/sbin:/usr/local/bin:/root/bin

# get rc vars
load_rc_config $name
: ${kaiwa_enable:="no"}
: ${kaiwa_msg:="kaiwa"}
: ${kaiwa_logdir:="/var/log"}
: ${kaiwa_user:="kaiwa"}
: ${kaiwa_app:="server.js"}
: ${kaiwa_forever:="/usr/local/bin/forever"}
: ${kaiwa_local_forever:="no"}
: ${kaiwa_forever_log:="/var/log/kaiwa-forever.log"}

case ${kaiwa_local_forever} in
    [Yy][Ee][Ss])
        kaiwa_forever="/usr/local/www/kaiwa/node_modules/.bin/forever"
        ;;
    *)
        ;;
esac


# make sure we're pointing to the right place
required_dirs="${kaiwa_dir}"
required_files="${kaiwa_dir}/${kaiwa_app}"

# any other checks go here
kaiwa_prestart()
{
    echo "$kaiwa_msg starting"
}

kaiwa_start()
{
    # run forever as ${kaiwa_user} (as documented above) rather than as root
    su -m ${kaiwa_user} -c "${kaiwa_forever} start -a -l ${kaiwa_forever_log} -o ${kaiwa_logdir}/kaiwa.log -e ${kaiwa_logdir}/kaiwa-error.log --minUptime 3000 --pidFile ${pidfile} --workingDir ${kaiwa_dir} ${kaiwa_dir}/${kaiwa_app}"
}

kaiwa_stop()
{
    # kill kaiwa nicely -- node should catch this signal gracefully
    su -m ${kaiwa_user} -c "${kaiwa_forever} stop --killSignal SIGTERM `cat ${pidfile}`"
}

run_rc_command "$1"

Enable the daemon in /etc/rc.conf:

kaiwa_enable="YES"
kaiwa_dir="/usr/local/www/kaiwa"

Start Kaiwa and enjoy

Start the service:

# service kaiwa start

Check that the daemon is listening:

# sockstat -4l | grep kaiwa
kaiwa node       54296 11 tcp4   10.2.0.141:8000       *:*


Building a continuous-integration Android build server on FreeBSD: Part three: configuring Jenkins for on-demand builds


In this series of blog posts, we're going to create an Android build server for continuous integration on FreeBSD.

  • Part one will explain how to build Android APKs using Gradle on FreeBSD using the Linux emulation.
  • Part two will explain how to configure Gitlab-CI to be able to run builds automatically for each commit.
  • Part three (this post) will explain how to configure Jenkins to be able to run builds and email the APKs to people.

Requirements

We want people from our project to be able to build APKs of an Android app and get them by email once they're built. People should be able to configure their app with some Jenkins parameters.

[Screenshot: starting a Jenkins build to build an Android app on a remote FreeBSD server]

This post does not explain the basics of Jenkins. You should read the documentation if it's your first time using it.

You will need a few plugins; at least the SSH Slaves plugin (to drive the build node), the Gradle plugin, and the Email Extension plugin used for the post-build step below.

Creating the node

In this scenario, I'm going to assume our FreeBSD build server is not on the same host as the Jenkins master.

Consequently, we need to link the two systems by installing a Jenkins slave node on the build system.

Preparing the server

On your Android build system, create a new user:

# pw group add -n jenkins-slave
# pw user add -n jenkins-slave -g jenkins-slave

Create a SSH key pair:

# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/jenkins-slave/.ssh/id_rsa): jenkins-slave-id_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in jenkins-slave-id_rsa.
Your public key has been saved in jenkins-slave-id_rsa.pub.

Add the public key to the user's SSH authorized_keys. That's how Jenkins will connect to the slave.

# mkdir ~/.ssh/
# cat jenkins-slave-id_rsa.pub >> ~/.ssh/authorized_keys
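
Before wiring things up in Jenkins, you can check that the key is accepted (the hostname here is just a placeholder):

# ssh -i jenkins-slave-id_rsa jenkins-slave@build.example.com uname -a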

Create the node on Jenkins

Go to "Manage Jenkins" -> "Manage credentials", and add a new "SSH Username with private key".
Put the private key you generated earlier there and configure the rest.

You use SSH keys, don't you?

Go to "Manage Jenkins" -> "Manage Nodes" and create a new node.

Configure the node with the right host, port and home directory. Set the location of the java executable, or Jenkins will only try to find it in /bin or /usr/bin and it won't work.
That won't connect. Did you forget about your firewall?

If everything goes well, the master node should connect to the server, install its JAR and start the slave node.

Nodes! More nodes!

# ls
jenkins-slave-pepper-id_rsa     jenkins-slave-pepper-id_rsa.pub slave.jar                       workspace

# ps -U jenkins-slave
  PID TT  STAT     TIME COMMAND
66474  -  IJ    0:08.67 sshd: jenkins-slave@notty (sshd)
66478  -  IsJ  11:51.54 /usr/local/openjdk8/bin/java -jar slave.jar

Configuring the job

Create a new job, give it a nice name, and start the configuration.

The parameters

Click on "This build is parameterized", and add as many parameters as you want. You will be able to use these parameters as Jenkins variable everywhere later in the build.

I have another one with, like, 10 different parameters

Here I have two parameters:

  • One to specify what kind of build I want: debug, staging, production, etc.
  • One that specifies where I want the APK to be sent once it's built (see the sketch below).
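
For example, assuming the two parameters are named BUILD_TYPE and RECIPIENT_EMAIL (names of my choosing), any later shell build step can read them as environment variables:

# In an "Execute shell" build step; parameter names are made up for this sketch.
echo "Building the ${BUILD_TYPE} flavor for ${RECIPIENT_EMAIL}"
gradle "assemble${BUILD_TYPE}"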

Fetching the code

Configure Jenkins to fetch the source code of the app where it's located.

Gimme the code!
Here I'll fetch it from git.

Building the app

Add the required environment variables to the build.

This is a good environment, don't you think?

Add a gradle build script, and invoke the tasks to build your app. Here you can use the parameters you set up at the beginning of the config.

You can build without gradle if you want

You can build without the gradle plugin if you want. It only displays the build output more nicely, but it is strictly non-essential.

Email the APKs once they're built

Add a post build step: "Editable Email Notification".

Put the recipient chosen by the user from the parameters into the list of recipients, customize the sender, reply-to, subject, content, etc.

Add the APK files as attachments.

Don't forget to configure the triggers to send the email if the build succeeds. By default, emails are only sent on failure.

Such a complicated configuration

Testing

Start a build. If everything goes well, you should receive the resulting APK by email a few seconds after the build is done.

[Screenshot: the Jenkins build is successful and the Android APK was built and sent by email]

Building a continuous-integration Android build server on FreeBSD: Part one: building APKs using Linux emulation


In this series of blog posts, we're going to create an Android build server for continuous integration on FreeBSD.

  • Part one (this post) will explain how to build Android APKs using Gradle on FreeBSD using the Linux emulation.
  • Part two will explain how to configure Gitlab-CI to be able to run builds automatically for each commit.
  • Part three will explain how to configure Jenkins to be able to run builds and email the APKs to people.

I'll be using a normal 10.2-RELEASE:

# uname -a
FreeBSD androidbuild 10.2-RELEASE-p11 FreeBSD 10.2-RELEASE-p11 #0: Wed Jan 27 15:56:01 CET 2016     root@androidbuild:/usr/obj/usr/src/sys/GENERIC  amd64

Parts of this post are duplicated from my old post on how to build APKs on FreeBSD.

Step 1: installing the system dependencies

We're going to install many packages, including a lot of GNU-style ones. So if you care about keeping your systems clean, I suggest you do all this in a jail.

Install gradle:

# make -C /usr/ports/devel/gradle/ install clean

Go drink the longest coffee you can brew. That stuff installs a lot of dependencies.

Install bash and link it to /bin/bash:

# make -C /usr/ports/shells/bash install clean
# ln -s /usr/local/bin/bash /bin/bash

Install git:

# make -C /usr/ports/devel/git install clean

Install python:

# make -C /usr/ports/lang/python install clean

Create a nice user to execute the builds:

# pw useradd androidbuild
# mkdir /home/androidbuild
# chown androidbuild:androidbuild /home/androidbuild

Load FreeBSD's Linux emulation system and install a base CentOS 6 system:

# kldload linux
# cd /usr/ports/emulators/linux_base-c6 && make install distclean

The latest version of the build tools (24.0.0) uses 64-bit binaries and libraries. If you want to be able to build your APKs with them, you'll also need the 64-bit Linux emulation, which requires FreeBSD >= 10.3. In that case, do this instead:

# kldload linux
# kldload linux64
# echo "OVERRIDE_LINUX_BASE_PORT=c6_64" >> /etc/make.conf
# echo "OVERRIDE_LINUX_NONBASE_PORTS=c6_64" >> /etc/make.conf
# cd /usr/ports/emulators/linux_base-c6 && make install distclean

If you already had the 32-bit CentOS base installed, uninstall the port, remove the /compat/linux files and reinstall the port after setting the two entries in make.conf.

Step 2: setting-up the Android SDK

Download the SDK

Fetch and extract the latest version of the Linux SDK:

% fetch 'https://dl.google.com/android/android-sdk_r24.4.1-linux.tgz'
% tar xzf android-sdk_r24.4.1-linux.tgz
% setenv ANDROID_HOME /home/androidbuild/android-sdk-linux/

Patch the SDK

As it is now, the SDK would download build tools for Windows, since it obviously won't recognize our FreeBSD.
Since we're going to use the Linux emulation, we need to patch the SDK so that it downloads Linux binaries.

Download the source of the SDK base:

% git clone https://android.googlesource.com/platform/tools/base

Apply the following patch:

diff -r -u a/common/src/main/java/com/android/SdkConstants.java b/common/src/main/java/com/android/SdkConstants.java
--- a/common/src/main/java/com/android/SdkConstants.java        2016-09-06 07:56:56.325948102 +0000
+++ b/common/src/main/java/com/android/SdkConstants.java        2016-09-06 07:58:10.721944140 +0000
@@ -635,6 +635,8 @@
             return PLATFORM_WINDOWS;
         } else if (os.startsWith("Linux")) {                //$NON-NLS-1$
             return PLATFORM_LINUX;
+        } else if (os.startsWith("FreeBSD")) {              //$NON-NLS-1$
+            return PLATFORM_LINUX;
         }

         return PLATFORM_UNKNOWN;
diff -r -u a/sdklib/src/main/java/com/android/sdklib/internal/repository/archives/ArchFilter.java b/sdklib/src/main/java/com/android/sdklib/internal/repository/archives/ArchFilter.java
--- a/sdklib/src/main/java/com/android/sdklib/internal/repository/archives/ArchFilter.java      2016-09-06 07:56:56.828948347 +0000
+++ b/sdklib/src/main/java/com/android/sdklib/internal/repository/archives/ArchFilter.java      2016-09-06 07:58:35.160941890 +0000
@@ -216,6 +216,8 @@
             hostOS = HostOs.WINDOWS;
         } else if (os.startsWith("Linux")) {                //$NON-NLS-1$
             hostOS = HostOs.LINUX;
+        } else if (os.startsWith("FreeBSD")) {                //$NON-NLS-1$
+            hostOS = HostOs.LINUX;
         }

         BitSize jvmBits;

Rebuild the patched files.

% javac common/src/main/java/com/android/SdkConstants.java
% javac sdklib/src/main/java/com/android/sdklib/internal/repository/archives/ArchFilter.java -cp "sdklib/src/main/java:common/src/main/java:annotations/src/main/java"

Replace the files inside the jar.

% cd sdklib/src/main/java/ && jar uf ${ANDROID_HOME}/tools/lib/sdklib.jar com/android/sdklib/internal/repository/archives/ArchFilter.class
% cd common/src/main/java && jar uf ${ANDROID_HOME}/tools/lib/common.jar com/android/SdkConstants.class

See this patch on the Android website: https://android-review.googlesource.com/#/c/100271/

Download the SDK packages and set-up the build tools

Go to the tool directory:

% cd ${ANDROID_HOME}/tools/

In the example, we're going to build an APK that uses API version 23.
Let's download the right packages:

% ./android list sdk -u -a
[...]
7- Android SDK Build-tools, revision 23.0.3
31- SDK Platform Android 6.0, API 23, revision 3
119- Google APIs, Android API 23, revision 1
% ./android update sdk -u -a -t tools,platform-tools,7,31,119

Let's find the Linux binaries in the build tools, and brand them as such:

% find build-tools/23.0.3/ -maxdepth 1 -type f -print0 | xargs -0 -n 10 file | grep "ELF"
build-tools/23.0.3/mipsel-linux-android-ld:  ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.8, stripped
build-tools/23.0.3/arm-linux-androideabi-ld: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.8, stripped
build-tools/23.0.3/llvm-rs-cc:               ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.24, BuildID[sha1]=8a4ffbc0e197147c4e968722e995605e1d06ea88, not stripped
build-tools/23.0.3/i686-linux-android-ld:    ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.8, stripped
build-tools/23.0.3/bcc_compat:               ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.24, BuildID[sha1]=d565d03b7bafd03d335bdd246832bb31c7cca527, not stripped
build-tools/23.0.3/aapt:                     ELF 32-bit LSB shared object, Intel 80386, version 1 (GNU/Linux), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.24, BuildID[sha1]=cfb63b4ad11d0c2d59f10329f0116706e99bf72e, not stripped
build-tools/23.0.3/aidl:                     ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.24, BuildID[sha1]=3cbd3d742040d61877a3c6544778bf4701b2f62d, not stripped
build-tools/23.0.3/aarch64-linux-android-ld: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.8, stripped
build-tools/23.0.3/split-select:             ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.24, BuildID[sha1]=b3bfb153d0ffaef6ed84c316ff682381ba8d75b2, not stripped
build-tools/23.0.3/dexdump:                  ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.24, BuildID[sha1]=a678a8163a2483107d285ffdc94fdde0d4fb2178, not stripped
build-tools/23.0.3/zipalign:                 ELF 32-bit LSB shared object, Intel 80386, version 1 (GNU/Linux), dynamically linked, interpreter /lib/ld-linux.so.2, for GNU/Linux 2.6.24, BuildID[sha1]=7d64216663df9fd3b4048952d095fbd07cb4284f, not stripped

% find build-tools/23.0.3/ -maxdepth 1 -type f -print0 | xargs -0 -n 10 file | grep "ELF" | awk 'BEGIN { FS = ":" } ; {print $1}' | xargs brandelf -t Linux
% find build-tools/23.0.3/ -maxdepth 1 -type f -print0 | xargs -0 -n 10 file | grep "ELF" | awk 'BEGIN { FS = ":" } ; {print $1}' | xargs chmod +x

The SDK is now configured and we should be able to build our apps there.

Step 3: building the APK

Fetch the sources of your app:

% git clone 'https://git.example.org/android-app'
% cd android-app

Be sure to have set environment variable ANDROID_HOME to the SDK location.
Start gradle and see if it complains.

% gradle tasks
------------------------------------------------------------
All tasks runnable from root project
------------------------------------------------------------

Android tasks
-------------
androidDependencies - Displays the Android dependencies of the project.
signingReport - Displays the signing info for each variant.
sourceSets - Prints out all the source sets defined in this project.
[...]
To see all tasks and more detail, run gradle tasks --all

To see more detail about a task, run gradle help --task <task>

BUILD SUCCESSFUL

Total time: 14.839 secs

Gradle is happy, let's try a build:

% gradle assembleRelease
[...]
BUILD SUCCESSFUL

Total time: 2 mins 24.784 secs

Let's look at the build output:

% ls Truc/build/outputs/apk/
truc-app-release-unaligned.apk   truc-app-release.apk

Success!

How to stop the watchdog timer of a BeagleBone Black running Linux

[Illustration: a BeagleBone Black with an operator setting its watchdog timer]

The BeagleBone Black's SoC (AM335x) includes a watchdog timer that will reset the whole board if it isn't pinged regularly.

Let's see if we can stop that thing on a board running the latest Debian GNU/Linux to date.

# uname -a
Linux beaglebone 4.4.9-ti-r25 #1 SMP Thu May 5 23:08:13 UTC 2016 armv7l GNU/Linux
root@beaglebone:~# cat /etc/debian_version
8.4

Ever since this commit, the OMAP watchdog driver has had the magic close feature enabled. This means that simply closing the timer's device won't stop the timer from ticking. The only way to stop it is to send it the magic character 'V' (a capital 'v') before closing.

# wdctl /dev/watchdog
wdctl: write failed: Invalid argument
Device:        /dev/watchdog
Identity:      OMAP Watchdog [version 0]
Timeout:       120 seconds
Timeleft:      119 seconds
FLAG           DESCRIPTION               STATUS BOOT-STATUS
KEEPALIVEPING  Keep alive ping reply          0           0
MAGICCLOSE     Supports magic close char      0           0
SETTIMEOUT     Set timeout (in seconds)       0           0

This feature is particularly useful if you want the watchdog timer to only be active when a specific application is running, and if you then want it to be stopped when the application is stopped normally.

Unfortunately, the kernel can be configured with a mode called "no way out", which means that even though the magic close feature of the driver is enabled, it won't be honored at all, and you are doomed to ping your timer until the end of time once you've opened the device.

# cat /proc/config.gz | gunzip | grep CONFIG_WATCHDOG_NOWAYOUT
CONFIG_WATCHDOG_NOWAYOUT=y

On a kernel version 3.8, the feature was not enabled:

$ cat /proc/config.gz | gunzip | grep CONFIG_WATCHDOG_NOWAYOUT
# CONFIG_WATCHDOG_NOWAYOUT is not set

So, how do we stop that thing?

Well, you can see in the code of the driver that the kernel's default value can be overridden by a module parameter:

static bool nowayout = WATCHDOG_NOWAYOUT;
module_param(nowayout, bool, 0);
MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started "
	"(default=" __MODULE_STRING(WATCHDOG_NOWAYOUT) ")");

Edit the boot configuration in /boot/uEnv.txt and set that parameter to 0 in the cmdline:

cmdline=coherent_pool=1M quiet cape_universal=enable omap_wdt.nowayout=0

Reboot the board, and check that the loaded command line was changed correctly:

# cat /proc/cmdline
console=tty0 console=ttyO0,115200n8 root=/dev/mmcblk0p1 rootfstype=ext4 rootwait coherent_pool=1M quiet cape_universal=enable omap_wdt.nowayout=0

That's it. Now if you send a 'V' to the watchdog right before closing it, it will be stopped.
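
Since a shell redirection opens the device, writes, and then closes it, a one-liner is enough once nowayout is disabled:

# printf 'V' > /dev/watchdog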

How to debug python code using GDB on FreeBSD without compromising your system


Introduction

We want to be able to use GDB to debug python code efficiently.

Let's say we have the following code:

from threading import Event
import random
from time import sleep

def blocking_function_one():
	while True:
		sleep(1)

def blocking_function_two():
	e = Event()
	e.wait()

if random.random() > 0.5:
	blocking_function_one()
else:
	blocking_function_two()

That code will block, and since it doesn't output anything, we have no way of knowing if we went into blocking_function_one or blocking_function_two. Or do we?

For reference, I'm running a 10.2-RELEASE:

# uname -a
FreeBSD bsdlab 10.2-RELEASE FreeBSD 10.2-RELEASE #0 r286666: Wed Aug 12 15:26:37 UTC 2015     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

Step 1: installing a debug version of python

We're going to work in a separate directory, in order not to alter our installation. If we need this in production, we want to be able to leave the system as clean as possible when we're done.

# mkdir /usr/local/python-debug

Build python 3.4 from the port collection and set it to be installed in the directory we just created:

# cd /usr/ports/lang/python34
# make install PREFIX=/usr/local/python-debug OPTIONS_FILE_SET+=DEBUG BATCH=1

Normally we would have used NO_PKG_REGISTER=1 to install the package without registering it on the system. Unfortunately, this option is not working anymore (see bug https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=182347).

So, let's copy the files ourselves:

# cp -r work/stage/usr/local/python-debug/* /usr/local/python-debug/

Let's try to run that new python installation:

# setenv LD_LIBRARY_PATH /usr/local/python-debug/lib
# /usr/local/python-debug/bin/python3.4dm
Python 3.4.5 (default, Sep  4 2016, 00:42:59)
[GCC 4.2.1 Compatible FreeBSD Clang 3.4.1 (tags/RELEASE_34/dot1-final 208032)] on freebsd10
Type "help", "copyright", "credits" or "license" for more information.
>>>

The "d" in "dm" means "debug".

Building python also produced an important file that we need to save for later before cleaning the work tree:

# cp work/Python-3.4.5/python-gdb.py ~/

Step 2: building a python aware GDB

Build GDB from the port collection:

  • Making sure the python extensions are enabled
  • Telling the configure script where to find our python installation
  • Telling the configure and build scripts where to find the relevant headers and libraries

# ln -s python3.4 /usr/local/python-debug/bin/python
# cd /usr/ports/devel/gdb
# make PREFIX=/usr/local/python-debug \
OPTIONS_FILE_SET+=PYTHON \
PYTHON_CMD=/usr/local/python-debug/bin \
BATCH=1 \
CFLAGS+="-I/usr/local/python-debug/include -L/usr/local/python-debug/lib" \
CXXFLAGS+="-I/usr/local/python-debug/include -L/usr/local/python-debug/lib"

Also copy that installation manually to our special directory:

# cp -r work/stage/usr/local/python-debug/* /usr/local/python-debug/

Let's check that it's working and has the python extensions:

# /usr/local/python-debug/bin/gdb
GNU gdb (GDB) 7.11.1 [GDB v7.11.1 for FreeBSD]
[...]
(gdb) python
>import gdb
>end
(gdb)

Step 3: wire it all together

Now we have:

  • A version of Python that integrates debug information.
  • A version of GDB that can run higher level GDB scripts written in Python.
  • A python-gdb script to add commands and macros.

Copy the GDB script somewhere where Python can load it:

# mkdir ~/.python_lib
# mv ~/python-gdb.py ~/.python_lib/python34_gdb.py

Let's run our stupid blocking script:

# setenv PATH "/usr/local/python-debug/bin/:${PATH}"
# python where-am-i-blocking.py
[blocked]

In another shell, find the PID of the script, and attach GDB there.

# ps auxw | grep python
root     24226   0.0  0.7  48664  15492  3  I+    3:00AM     0:00.13 python where-am-i-blocking.py (python3.4)
root     24235   0.0  0.1  18824   2004  4  S+    3:00AM     0:00.00 grep python

# setenv PATH "/usr/local/python-debug/bin/:${PATH}"
# gdb python 24226
GNU gdb (GDB) 7.11.1 [GDB v7.11.1 for FreeBSD]
[...]
[Switching to LWP 100160 of process 24226]
0x00000008018e3f18 in _umtx_op () from /lib/libc.so.7
(gdb)

Load the GDB python script:

(gdb) python
>import sys
>sys.path.append('/root/.python_lib')
>import python34_gdb
>end

The python macros are now loaded:

(gdb) py
py-bt               py-down             py-locals           py-up               python-interactive
py-bt-full          py-list             py-print            python

Let's see where we are:

(gdb) py-bt
Traceback (most recent call first):
  <built-in method acquire of _thread.lock object at remote 0x80075c2a8>
  File "/usr/local/python-debug/lib/python3.4/threading.py", line 290, in wait
    waiter.acquire()
  File "/usr/local/python-debug/lib/python3.4/threading.py", line 546, in wait
    signaled = self._cond.wait(timeout)
  File "where-am-i-blocking.py", line 11, in blocking_function_two
    e.wait()
  File "where-am-i-blocking.py", line 16, in <module>
    blocking_function_two()

We're in blocking_function_two.

Let's check the wait's frame local variables:

(gdb) bt
#0  0x00000008018e3f18 in _umtx_op () from /lib/libc.so.7
#1  0x00000008018d3604 in sem_timedwait () from /lib/libc.so.7
#2  0x0000000800eb0421 in PyThread_acquire_lock_timed (lock=0x802417590, microseconds=-1, intr_flag=1) at Python/thread_pthread.h:352
#3  0x0000000800eba84f in acquire_timed (lock=0x802417590, microseconds=-1) at ./Modules/_threadmodule.c:71
#4  0x0000000800ebab82 in lock_PyThread_acquire_lock (self=0x80075c2a8, args=(), kwds=0x0) at ./Modules/_threadmodule.c:139
#5  0x0000000800cfa963 in PyCFunction_Call (func=<built-in method acquire of _thread.lock object at remote 0x80075c2a8>, arg=(), kw=0x0)
    at Objects/methodobject.c:99
#6  0x0000000800e31716 in call_function (pp_stack=0x7fffffff5a00, oparg=0) at Python/ceval.c:4237
#7  0x0000000800e29fc0 in PyEval_EvalFrameEx (
    f=Frame 0x80245d738, for file /usr/local/python-debug/lib/python3.4/threading.py, line 290, in wait (self=<Condition(_lock=<_thread.l
ock at remote 0x80075c510>, _waiters=<collections.deque at remote 0x800b0e9d8>, release=<built-in method release of _thread.lock object a
t remote 0x80075c510>, acquire=<built-in method acquire of _thread.lock object at remote 0x80075c510>) at remote 0x800a6fc88>, timeout=No
ne, waiter=<_thread.lock at remote 0x80075c2a8>, saved_state=None, gotit=False), throwflag=0) at Python/ceval.c:2838
[...]
#25 0x0000000800eb4e86 in run_file (fp=0x801c1e140, filename=0x802418090 L"where-am-i-blocking.py", p_cf=0x7fffffffe978)
    at Modules/main.c:319
#26 0x0000000800eb3ab7 in Py_Main (argc=2, argv=0x802416090) at Modules/main.c:751
#27 0x0000000000400cae in main (argc=2, argv=0x7fffffffeaa8) at ./Modules/python.c:69

(gdb) frame 7
#7  0x0000000800e29fc0 in PyEval_EvalFrameEx (
    f=Frame 0x80245d738, for file /usr/local/python-debug/lib/python3.4/threading.py, line 290, in wait (self=<Condition(_lock=<_thread.lock at remote 0x80075c510>, _waiters=<collections.deque at remote 0x800b0e9d8>, release=<built-in method release of _thread.lock object at remote 0x80075c510>, acquire=<built-in method acquire of _thread.lock object at remote 0x80075c510>) at remote 0x800a6fc88>, timeout=None, waiter=<_thread.lock at remote 0x80075c2a8>, saved_state=None, gotit=False), throwflag=0) at Python/ceval.c:2838
2838                res = call_function(&sp, oparg);

(gdb) py-locals
self = <Condition(_lock=<_thread.lock at remote 0x80075c510>, _waiters=<collections.deque at remote 0x800b0e9d8>, release=<built-in method release of _thread.lock object at remote 0x80075c510>, acquire=<built-in method acquire of _thread.lock object at remote 0x80075c510>) at remote 0x800a6fc88>
timeout = None
waiter = <_thread.lock at remote 0x80075c2a8>
saved_state = None
gotit = False

If you don't want to or can't attach to the running process, you can do the same thing with a core dump:

# gcore 24226
# gdb python core.24226

If you don't want to add the lib directory to python's path every time you use GDB, add it to your profile's GDB init script:

cat > ~/.gdbinit <<EOF
python
import sys
sys.path.append('/root/.python_lib')
end
EOF

You'll only need to import the module (python import python34_gdb) and you'll be good to go.


Bonus problem: loading Debian's libc's debug info on an armhf

I've done the exact same thing on a BeagleBone Black system running Debian.

Unfortunately GDB was complaining that the stack was corrupt.

# gdb /usr/local/opt/python-3.4.4/bin/python3.4dm core.18513
GNU gdb (GDB) 7.11
[...]
Reading symbols from /usr/local/opt/python-3.4.4/bin/python3.4dm...done.
[New LWP 18513]
[New LWP 18531]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".
Core was generated by `python3.4dm'.
#0  0xb6f7d7e0 in recv () from /lib/arm-linux-gnueabihf/libpthread.so.0
[Current thread is 1 (Thread 0xb6fac000 (LWP 18513))]
(gdb) bt
#0  0xb6f7d7e0 in recv () from /lib/arm-linux-gnueabihf/libpthread.so.0
#1  0xb6f7d7d4 in recv () from /lib/arm-linux-gnueabihf/libpthread.so.0
#2  0xb64ae6f8 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb)

A search on the internet indicated that it was because I was missing package libc6-dbg, but that package was installed.

# apt-get install libc6-dbg
Reading package lists... Done
Building dependency tree
Reading state information... Done
libc6-dbg is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

The problem was that using my custom installation directory made GDB look for these files in the wrong place.

(gdb) show debug-file-directory
The directory where separate debug symbols are searched for is "/usr/local/opt/python-3.4.4/lib/debug".

Setting that variable in the init file solves the problem:

cat >> ~/.gdbinit <<EOF
set debug-file-directory /usr/lib/debug
EOF
Now the backtrace resolves correctly:

[Current thread is 1 (Thread 0xb6fac000 (LWP 18513))]
(gdb) bt
#0  0xb6f7d7e0 in recv () at ../sysdeps/unix/syscall-template.S:82
#1  0xb68fe18e in sock_recv_guts (s=0xb64ae6f8, cbuf=0x50e2e8 '\313' <repeats 199 times>, <incomplete sequence \313>..., len=65536, flags=0)
    at /tmp/Python-3.4.4/Modules/socketmodule.c:2600
[...]
#38 0x00025752 in PyRun_AnyFileExFlags (fp=0x32bcc8, filename=0xb6be5310 "main.py", closeit=1, flags=0xbed1db20) at Python/pythonrun.c:1287
#39 0x0003b1ee in run_file (fp=0x32bcc8, filename=0x2c99f0 L"main.py", p_cf=0xbed1db20) at Modules/main.c:319
#40 0x0003beb8 in Py_Main (argc=2, argv=0x2c9010) at Modules/main.c:751
#41 0x000208d8 in main (argc=2, argv=0xbed1dd14) at ./Modules/python.c:69

How to install PyInstaller in a Python Virtual-Env on FreeBSD


# uname -a
FreeBSD freebsderlang 10.3-RELEASE FreeBSD 10.3-RELEASE #0 r297264: Fri Mar 25 02:10:02 UTC 2016     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

Install Python and VirtualEnv

Install Python:

# make -C /usr/ports/lang/python34/ install clean

Install virtualenv:

# setenv PYTHON_VERSION python3.4
# make -C /usr/ports/devel/py-virtualenv install clean

Create a virtual env:

# virtualenv-3.4 venv
Using base prefix '/usr/local'
New python executable in /root/somewhere/venv/bin/python3.4
Also creating executable in /root/somewhere/venv/bin/python
Installing setuptools, pip, wheel...done.

Dive into it:

# source venv/bin/activate.csh

Download, build and install PyInstaller

Download the latest version of PyInstaller, check that it was correctly downloaded, and extract it:

[venv] # fetch 'https://github.com/pyinstaller/pyinstaller/releases/download/v3.2/PyInstaller-3.2.tar.gz' --no-verify-peer
[venv] # sha256 PyInstaller-3.2.tar.gz
SHA256 (PyInstaller-3.2.tar.gz) = 7598d4c9f5712ba78beb46a857a493b1b93a584ca59944b8e7b6be00bb89cabc
[venv] # tar xzf PyInstaller-3.2.tar.gz

Go into the bootloader directory and build all:

[venv] # cd PyInstaller-3.2/bootloader/
[venv] # python waf all

Go back to the release root, then build and install as usual:

[venv] # cd ..
[venv] # python setup.py install

Test PyInstaller

[venv] # cat > some_python_script.py << EOF
print("Je suis une saucisse")
EOF
[venv] # pyinstaller --onefile some_python_script.py
[venv] # dist/some_python_script
Je suis une saucisse

Creating a simple git repository server (with ACLs) on FreeBSD


Let's create a simple git server on FreeBSD.

It should:

  • Allow people to clone/pull/push using both SSH and HTTP.
  • Have a web view.
  • Have ACLs to allow repositories to only be visible and/or accessible by some specific users.

SSH interaction: gitolite

Let's install gitolite. It handles SSH connections and has the ACL functionality we're after.

First, here's a good read about how gitolite works: http://gitolite.com/gitolite/how.html#%281%29

On the git server

Install gitolite:

# make -C /usr/ports/devel/gitolite/ install clean

Copy your public key to the server, naming it [username].pub. That username will be considered the admin user.

Create a UNIX user that will own the files:

# pw useradd gitolite
# mkdir /home/gitolite
# chown gitolite:gitolite /home/gitolite
# cd /home/gitolite

Login as the UNIX user and initialize the system:

# sudo -s -u gitolite
% id
uid=1003(gitolite) gid=1003(gitolite) groups=1003(gitolite)
% /usr/local/bin/gitolite setup -pk admin.pub

Notice that the admin user can login using SSH, and that it will only execute gitolite's shell:

% cat .ssh/authorized_keys
command="/usr/local/libexec/gitolite/gitolite-shell admin",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa [some key]== zwm@git.example.net

That's all you need to do on the server.

On your client

Creating users and repositories

Clone the admin repository.

# git clone gitolite@git.example.net:gitolite-admin

Create two new keys (and thus users) and add them to the repository:

# ssh-keygen -t rsa -f erika
# ssh-keygen -t rsa -f jean

# cp erika.pub gitolite-admin/keydir
# cp jean.pub gitolite-admin/keydir

# git add keydir/jean.pub
# git add keydir/erika.pub
# git commit -m "Add users Jean and Erika."
# git push origin master

Create new repositories by setting their ACLs in the config file:

# cat conf/gitolite.conf

repo gitolite-admin
    RW+     =   admin

repo testing
    RW+     =   @all

repo erika_only
    RW+     =   erika

repo erika_and_jean
    RW+     =   erika jean

# git add conf/gitolite.conf
# git commit -m "Add two new repos"
# git push origin master

Using the server

Try to clone repository erika_only with user jean:

# setenv GIT_SSH_COMMAND 'ssh -i jean'
# git clone gitolite@git.example.net:erika_only
Cloning into 'erika_only'...
FATAL: R any erika_only jean DENIED by fallthru
(or you mis-spelled the reponame)
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

Our access was denied. ACLs are working.

Try to clone an ACL-allowed repository:

# git clone gitolite@git.example.net:erika_and_jean
# cd erika_and_jean
# echo "Test" > test.txt
# git add test.txt
# git commit -m "Test commit"
# git push origin master
Counting objects: 3, done.
Writing objects: 100% (3/3), 218 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To gitolite@git.example.net:erika_and_jean
 * [new branch]      master -> master

Success.

HTTP interaction: nginx+git-http-backend

I assume you already know how to install and do the basic configuration of nginx.

Install fcgiwrap:

# make -C /usr/ports/www/fcgiwrap install clean

Configure fcgiwrap to use the right UNIX user in /etc/rc.conf:

fcgiwrap_enable="YES"
fcgiwrap_user="gitolite"
fcgiwrap_profiles="gitolite"
fcgiwrap_gitolite_socket="tcp:198.51.100.42:7081"

Create a password file:

# cat /usr/local/etc/nginx/git_users.htpasswd
jean:$apr1$fkADkYbl$Doen7IMxNwmD/r6X1LdM.1
erika:$apr1$fOOlnSig$4PONnRHK3PMu8j1HnxECc0

Use openssl passwd -apr1 to generate passwords.
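
For example, to add a user (the username is made up; openssl will prompt for the password):

# printf 'marc:%s\n' "$(openssl passwd -apr1)" >> /usr/local/etc/nginx/git_users.htpasswd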

Configure nginx:

    server {
        [usual config here]

        auth_basic           "RESTRICTED ACCESS";
        auth_basic_user_file /usr/local/etc/nginx/git_users.htpasswd;
        client_max_body_size 256m;

        location ~ /git(/.*) {
            root /home/gitolite/;
            fastcgi_split_path_info ^(/git)(.*)$;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param SCRIPT_FILENAME     /usr/local/libexec/gitolite/gitolite-shell;
            fastcgi_param QUERY_STRING $query_string;
            fastcgi_param REMOTE_USER        $remote_user;

            fastcgi_param GIT_PROJECT_ROOT    /home/gitolite/repositories;
            fastcgi_param GIT_HTTP_BACKEND /usr/local/libexec/git-core/git-http-backend;
            fastcgi_param GITOLITE_HTTP_HOME /home/gitolite;
            fastcgi_param GIT_HTTP_EXPORT_ALL "";

            # This include must be AFTER the above declaration. Otherwise, SCRIPT_FILENAME will be set incorrectly and the shell will 403.
            include       fastcgi_params;
            fastcgi_pass 198.51.100.42:7081;
        }
    }

Here we call gitolite-shell instead of git-http-backend directly to have gitolite check the users' permissions.

Let's clone a repository, add a commit and push it:

# git clone 'http://jean:lol@git.example.net:8080/git/erika_and_jean.git' erika_and_jean
Cloning into 'erika_and_jean'...
remote: Counting objects: 3, done.
remote: Total 3 (delta 0), reused 0 (delta 0)
Unpacking objects: 100% (3/3), done.
Checking connectivity... done.

# cd erika_and_jean/
# vim test.txt
# git add test.txt
# git commit -m "Pushed from HTTP"
[master 7604185] Pushed from HTTP
 1 file changed, 1 insertion(+)

# git push origin master
Counting objects: 3, done.
Writing objects: 100% (3/3), 258 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To http://jean:lol@git.example.net:8080/git/erika_and_jean.git
   fa03b7d..7604185  master -> master

Let's try to clone a repository we're not allowed to see:

# git clone 'http://jean:lol@git.example.net:8080/git/erika.git' erika
Cloning into 'erika'...
fatal: remote error: FATAL: R any erika jean DENIED by fallthru
(or you mis-spelled the reponame)

ACLs are working. Success.

Web view: GitWeb

Make sure git is compiled with option GITWEB.

Copy the gitweb files where nginx will look for them:

# cp -r /usr/local/share/examples/git/gitweb /usr/local/www/gitweb

Configure nginx:

      location / {
          root /usr/local/www/gitweb;
          index gitweb.cgi;

          location ~ ^/(.*\.cgi)$ {
              include  fastcgi_params;
              fastcgi_pass 198.51.100.42:7081;
              fastcgi_index gitweb.cgi;
              fastcgi_param SCRIPT_FILENAME /usr/local/www/gitweb/gitweb.cgi;
              fastcgi_param DOCUMENT_ROOT /usr/local/www/gitweb;
              fastcgi_param GITWEB_CONFIG /usr/local/etc/gitweb.conf;
              fastcgi_param REMOTE_USER        $remote_user;
          }
      }

No magic here. The Gitolite/GitWeb interaction is irrelevant to the webserver.

Use the gitolite command to find the values of the GL_ variables:

gitolite query-rc -a

Configure gitweb in /usr/local/etc/gitweb.conf:

BEGIN {
    $ENV{HOME} = "/home/gitolite";
    $ENV{GL_BINDIR} = "/usr/local/libexec/gitolite";
    $ENV{GL_LIBDIR} = "/usr/local/libexec/gitolite/lib";
}

use lib $ENV{GL_LIBDIR};
use Gitolite::Easy;

$projectroot = $ENV{GL_REPO_BASE};
our $site_name = "Example.net Git viewer";

$ENV{GL_USER} = $cgi->remote_user || "gitweb";

$export_auth_hook = sub {
    my $repo = shift;
    # gitweb passes us the full repo path; we need to strip the beginning and
    # the end, to get the repo name as it is specified in gitolite conf
    return unless $repo =~ s/^\Q$projectroot\E\/?(.+)\.git$/$1/;

    # call Easy.pm's 'can_read' function
    return can_read($repo);
};

When connected as erika:

[Screenshot: GitWeb showing the repositories visible to erika]

When connected as jean:

[Screenshot: GitWeb showing the repositories visible to jean]

ACLs are working. Success.

Conclusion

Our users can now see, read and sometimes write into the repositories of our git server.

You can create guest accounts that will only be able to see specific repositories, and they won't even know the other ones are here.
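
For example, a hypothetical read-only guest in conf/gitolite.conf (with a matching keydir/guest.pub):

repo some_public_repo
    R       =   guest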

No need to maintain a GitLab instance if your needs are simple.

Building and running Couchbase on FreeBSD


Let's try to build and run Couchbase on FreeBSD!

The system I'm using here is completely new.

# uname -a
FreeBSD couchbasebsd 10.2-RELEASE FreeBSD 10.2-RELEASE #0 r286666: Wed Aug 12 15:26:37 UTC 2015     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

Fetching the source

Let's download repo, Google's tool to fetch multiple git repositories at once.

# fetch https://storage.googleapis.com/git-repo-downloads/repo -o /root/bin/repo --no-verify-peer
/root/bin/repo                                100% of   25 kB 1481 kBps 00m00s

I don't have any certificate bundle installed, so I need --no-verify-peer to prevent openssl from complaining. In that case I must verify that the file is correct before executing it.

# sha1 /root/bin/repo
SHA1 (/root/bin/repo) = da0514e484f74648a890c0467d61ca415379f791

The list of SHA1s can be found in Android Open Source Project - Downloading the Source.

Make it executable.

# chmod +x /root/bin/repo

Create a directory to work in.

# mkdir couchbase && cd couchbase

I'll be fetching branch 3.1.1, which is the latest release at the time I'm writing this.

# /root/bin/repo init -u git://github.com/couchbase/manifest -m released/3.1.1.xml
env: python: No such file or directory

Told you the system was brand new.

# make -C /usr/ports/lang/python install clean

Let's try again.

# /root/bin/repo init -u git://github.com/couchbase/manifest -m released/3.1.1.xml

fatal: 'git' is not available
fatal: [Errno 2] No such file or directory

Please make sure git is installed and in your path.

Install git:

# make -C /usr/ports/devel/git install clean

Try again:

# /root/bin/repo init -u git://github.com/couchbase/manifest -m released/3.1.1.xml

Traceback (most recent call last):
  File "/root/couchbase/.repo/repo/main.py", line 526, in <module>
    _Main(sys.argv[1:])
  File "/root/couchbase/.repo/repo/main.py", line 502, in _Main
    result = repo._Run(argv) or 0
  File "/root/couchbase/.repo/repo/main.py", line 175, in _Run
    result = cmd.Execute(copts, cargs)
  File "/root/couchbase/.repo/repo/subcmds/init.py", line 395, in Execute
    self._ConfigureUser()
  File "/root/couchbase/.repo/repo/subcmds/init.py", line 289, in _ConfigureUser
    name  = self._Prompt('Your Name', mp.UserName)
  File "/root/couchbase/.repo/repo/project.py", line 703, in UserName
    self._LoadUserIdentity()
  File "/root/couchbase/.repo/repo/project.py", line 716, in _LoadUserIdentity
    u = self.bare_git.var('GIT_COMMITTER_IDENT')
  File "/root/couchbase/.repo/repo/project.py", line 2644, in runner
    p.stderr))
error.GitError: manifests var:
*** Please tell me who you are.

Run

  git config --global user.email "you@example.com"
  git config --global user.name "Your Name"

to set your account's default identity.
Omit --global to set the identity only in this repository.

fatal: unable to auto-detect email address (got 'root@couchbasebsd.(none)')

Configure your git information:

# git config --global user.email "you@example.com"
# git config --global user.name "Your Name"

Try again:

# /root/bin/repo init -u git://github.com/couchbase/manifest -m released/3.1.1.xml

Your identity is: Your Name <you@example.com>
If you want to change this, please re-run 'repo init' with --config-name

[...]

repo has been initialized in /root/couchbase

Repo was initialized successfully. Let's sync!

# repo sync
[...]
Fetching projects: 100% (25/25), done.
Checking out files: 100% (2988/2988), done.
Checking out files: 100% (11107/11107), done.
Checking out files: 100% (3339/3339), done.
Checking out files: 100% (1256/1256), done.
Checking out files: 100% (4298/4298), done.
Syncing work tree: 100% (25/25), done.

We now have our source environment set up.

Building

Let's invoke the makefile.

# gmake
(cd build && cmake -G "Unix Makefiles" -D CMAKE_INSTALL_PREFIX="/root/couchbase/install" -D CMAKE_PREFIX_PATH=";/root/couchbase/install" -D PRODUCT_VERSION= -D BUILD_ENTERPRISE= -D CMAKE_BUILD_TYPE=Debug  ..)
cmake: not found
Makefile:42: recipe for target 'build/Makefile' failed
gmake[1]: *** [build/Makefile] Error 127
GNUmakefile:5: recipe for target 'all' failed
gmake: *** [all] Error 2

CMake is missing? Let's install it.

# make -C /usr/ports/devel/cmake install clean

Let's try again...

# gmake

CMake Error at tlm/cmake/Modules/FindCouchbaseTcMalloc.cmake:38 (MESSAGE):
  Can not find tcmalloc.  Exiting.
Call Stack (most recent call first):
  tlm/cmake/Modules/CouchbaseMemoryAllocator.cmake:3 (INCLUDE)
  tlm/cmake/Modules/CouchbaseSingleModuleBuild.cmake:11 (INCLUDE)
  CMakeLists.txt:12 (INCLUDE)

-- Configuring incomplete, errors occurred!
See also "/root/couchbase/build/CMakeFiles/CMakeOutput.log".
Makefile:42: recipe for target 'build/Makefile' failed
gmake[1]: *** [build/Makefile] Error 1
GNUmakefile:5: recipe for target 'all' failed
gmake: *** [all] Error 2

What the hell is the system looking for?

# cat tlm/cmake/Modules/FindCouchbaseTcMalloc.cmake
[...]
FIND_PATH(TCMALLOC_INCLUDE_DIR gperftools/malloc_hook_c.h
          PATHS
              ${_gperftools_exploded}/include)
[...]

Where is that malloc_hook_c.h?

# grep -R gperftools/malloc_hook_c.h *
gperftools/Makefile.am:                                    src/gperftools/malloc_hook_c.h \
gperftools/Makefile.am:                               src/gperftools/malloc_hook_c.h \
gperftools/Makefile.am:##                           src/gperftools/malloc_hook_c.h \
gperftools/src/google/malloc_hook_c.h:#warning "google/malloc_hook_c.h is deprecated. Use gperftools/malloc_hook_c.h instead"
gperftools/src/google/malloc_hook_c.h:#include <gperftools/malloc_hook_c.h>
gperftools/src/gperftools/malloc_hook.h:#include <gperftools/malloc_hook_c.h>  // a C version of the malloc_hook interface
gperftools/src/tests/malloc_extension_c_test.c:#include <gperftools/malloc_hook_c.h>
[...]

It's in the gperftools directory. Let's build that module first.

# cd gperftools/

# ./autogen.sh

# ./configure
[...]
config.status: creating Makefile
config.status: creating src/gperftools/tcmalloc.h
config.status: creating src/windows/gperftools/tcmalloc.h
config.status: creating src/config.h
config.status: executing depfiles commands
config.status: executing libtool commands

# make && make install

Let's try to build again.

# cd ..

# gmake
CMake Error at tlm/cmake/Modules/FindCouchbaseIcu.cmake:108 (MESSAGE):
  Can't build Couchbase without ICU
Call Stack (most recent call first):
  tlm/cmake/Modules/CouchbaseSingleModuleBuild.cmake:16 (INCLUDE)
  CMakeLists.txt:12 (INCLUDE)

ICU is missing? Let's install it.

# make -C /usr/ports/devel/icu install clean

Let's try again.

# gmake
CMake Error at tlm/cmake/Modules/FindCouchbaseSnappy.cmake:34 (MESSAGE):
  Can't build Couchbase without Snappy
Call Stack (most recent call first):
  tlm/cmake/Modules/CouchbaseSingleModuleBuild.cmake:17 (INCLUDE)
  CMakeLists.txt:12 (INCLUDE)

snappy is missing? Let's install it.

# make -C /usr/ports/archivers/snappy install

Do not make the mistake of installing multimedia/snappy instead. That is a totally unrelated port, and it will install 175 crappy Linux/X11 dependencies on your system.

Let's try again:

# gmake
CMake Error at tlm/cmake/Modules/FindCouchbaseV8.cmake:52 (MESSAGE):
  Can't build Couchbase without V8
Call Stack (most recent call first):
  tlm/cmake/Modules/CouchbaseSingleModuleBuild.cmake:18 (INCLUDE)
  CMakeLists.txt:12 (INCLUDE)

V8 is missing? Let's install it.

# make -C /usr/ports/lang/v8 install clean

Let's try again.

# gmake
CMake Error at tlm/cmake/Modules/FindCouchbaseErlang.cmake:80 (MESSAGE):
  Erlang not found - cannot continue building
Call Stack (most recent call first):
  tlm/cmake/Modules/CouchbaseSingleModuleBuild.cmake:21 (INCLUDE)
  CMakeLists.txt:12 (INCLUDE)

Erlang FTW!

# make -C /usr/ports/lang/erlang install clean

Let's build again.

# gmake
[...]
/root/couchbase/platform/src/cb_time.c:60:2: error: "Don't know how to build cb_get_monotonic_seconds"
#error "Don't know how to build cb_get_monotonic_seconds"
 ^
1 error generated.
platform/CMakeFiles/platform.dir/build.make:169: recipe for target 'platform/CMakeFiles/platform.dir/src/cb_time.c.o' failed
gmake[4]: *** [platform/CMakeFiles/platform.dir/src/cb_time.c.o] Error 1
CMakeFiles/Makefile2:285: recipe for target 'platform/CMakeFiles/platform.dir/all' failed
gmake[3]: *** [platform/CMakeFiles/platform.dir/all] Error 2
Makefile:126: recipe for target 'all' failed
gmake[2]: *** [all] Error 2
Makefile:36: recipe for target 'compile' failed
gmake[1]: *** [compile] Error 2
GNUmakefile:5: recipe for target 'all' failed
gmake: *** [all] Error 2

At last! A real error. Let's see the code.

# cat platform/src/cb_time.c

/*
    return a monotonically increasing value with a seconds frequency.
*/
uint64_t cb_get_monotonic_seconds() {
    uint64_t seconds = 0;
#if defined(WIN32)
    /* GetTickCound64 gives us near 60years of ticks...*/
    seconds =  (GetTickCount64() / 1000);
#elif defined(__APPLE__)
    uint64_t time = mach_absolute_time();

    static mach_timebase_info_data_t timebase;
    if (timebase.denom == 0) {
      mach_timebase_info(&timebase);
    }

    seconds = (double)time * timebase.numer / timebase.denom * 1e-9;
#elif defined(__linux__) || defined(__sun)
    /* Linux and Solaris can use clock_gettime */
    struct timespec tm;
    if (clock_gettime(CLOCK_MONOTONIC, &tm) == -1) {
        abort();
    }
    seconds = tm.tv_sec;
#else
#error "Don't know how to build cb_get_monotonic_seconds"
#endif

    return seconds;
}

FreeBSD also has clock_gettime, so let's patch the file:

diff -u platform/src/cb_time.c.orig platform/src/cb_time.c
--- platform/src/cb_time.c.orig 2015-10-07 19:26:14.258513000 +0200
+++ platform/src/cb_time.c      2015-10-07 19:26:29.768324000 +0200
@@ -49,7 +49,7 @@
     }

     seconds = (double)time * timebase.numer / timebase.denom * 1e-9;
-#elif defined(__linux__) || defined(__sun)
+#elif defined(__linux__) || defined(__sun) || defined(__FreeBSD__)
     /* Linux and Solaris can use clock_gettime */
     struct timespec tm;
     if (clock_gettime(CLOCK_MONOTONIC, &tm) == -1) {

Next error, please.

# gmake
Linking CXX shared library libplatform.so
/usr/bin/ld: cannot find -ldl
cc: error: linker command failed with exit code 1 (use -v to see invocation)
platform/CMakeFiles/platform.dir/build.make:210: recipe for target 'platform/libplatform.so.0.1.0' failed
gmake[4]: *** [platform/libplatform.so.0.1.0] Error 1

Aaah, good old Linux dl library. Let's get rid of that in the CMake config:

diff -u CMakeLists.txt.orig CMakeLists.txt
--- CMakeLists.txt.orig 2015-10-07 19:30:45.546580000 +0200
+++ CMakeLists.txt      2015-10-07 19:36:27.052693000 +0200
@@ -34,7 +34,9 @@
 ELSE (WIN32)
    SET(PLATFORM_FILES src/cb_pthreads.c src/urandom.c)
    SET(THREAD_LIBS "pthread")
-   SET(DLOPENLIB "dl")
+   IF(NOT CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")
+      SET(DLOPENLIB "dl")
+   ENDIF(NOT CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")

    IF (NOT APPLE)
       SET(RTLIB "rt")

Next!

FreeBSD has DTrace, but not the same implementation as Solaris, so we must disable it.

Someone already did that: see the commit "Disable DTrace for FreeBSD" for the patch:

--- a/cmake/Modules/FindCouchbaseDtrace.cmake
+++ b/cmake/Modules/FindCouchbaseDtrace.cmake
@@ -1,18 +1,19 @@
-# stupid systemtap use a binary named dtrace as well..
+# stupid systemtap use a binary named dtrace as well, but it's not dtrace
+IF (NOT CMAKE_SYSTEM_NAME STREQUAL "Linux")
+   IF (CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")
+      MESSAGE(STATUS "We don't have support for DTrace on FreeBSD")
+   ELSE (CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")
+      FIND_PROGRAM(DTRACE dtrace)
+      IF (DTRACE)
+         SET(ENABLE_DTRACE True CACHE BOOL "Whether DTrace has been found")
+         MESSAGE(STATUS "Found dtrace in ${DTRACE}")
 
-IF (NOT ${CMAKE_SYSTEM_NAME} STREQUAL "Linux")
+         IF (CMAKE_SYSTEM_NAME MATCHES "SunOS")
+            SET(DTRACE_NEED_INSTUMENT True CACHE BOOL
+                "Whether DTrace should instrument object files")
+         ENDIF (CMAKE_SYSTEM_NAME MATCHES "SunOS")
+      ENDIF (DTRACE)
 
-FIND_PROGRAM(DTRACE dtrace)
-IF (DTRACE)
-   SET(ENABLE_DTRACE True CACHE BOOL "Whether DTrace has been found")
-   MESSAGE(STATUS "Found dtrace in ${DTRACE}")
-
-   IF (CMAKE_SYSTEM_NAME MATCHES "SunOS")
-      SET(DTRACE_NEED_INSTUMENT True CACHE BOOL
-          "Whether DTrace should instrument object files")
-   ENDIF (CMAKE_SYSTEM_NAME MATCHES "SunOS")
-ENDIF (DTRACE)
-
-MARK_AS_ADVANCED(DTRACE_NEED_INSTUMENT ENABLE_DTRACE DTRACE)
-
-ENDIF (NOT ${CMAKE_SYSTEM_NAME} STREQUAL "Linux")
+      MARK_AS_ADVANCED(DTRACE_NEED_INSTUMENT ENABLE_DTRACE DTRACE)
+   ENDIF (CMAKE_SYSTEM_NAME STREQUAL "FreeBSD")
+ENDIF (NOT CMAKE_SYSTEM_NAME STREQUAL "Linux")

Next!

# gmake
Linking C executable couch_compact
libcouchstore.so: undefined reference to `fdatasync'
cc: error: linker command failed with exit code 1 (use -v to see invocation)
couchstore/CMakeFiles/couch_compact.dir/build.make:89: recipe for target 'couchstore/couch_compact' failed
gmake[4]: *** [couchstore/couch_compact] Error 1
CMakeFiles/Makefile2:1969: recipe for target 'couchstore/CMakeFiles/couch_compact.dir/all' failed

FreeBSD does not have fdatasync. Instead, we should use fsync:

diff -u couchstore/config.cmake.h.in.orig couchstore/config.cmake.h.in
--- couchstore/config.cmake.h.in.orig   2015-10-07 19:56:05.461932000 +0200
+++ couchstore/config.cmake.h.in        2015-10-07 19:56:42.973040000 +0200
@@ -38,10 +38,10 @@
 #include <unistd.h>
 #endif

-#ifdef __APPLE__
-/* autoconf things OS X has fdatasync but it doesn't */
+#if defined(__APPLE__) || defined(__FreeBSD__)
+/* autoconf things OS X  and FreeBSD have fdatasync but they don't */
 #define fdatasync(FD) fsync(FD)
-#endif /* __APPLE__ */
+#endif /* __APPLE__ || __FreeBSD__ */

 #include <platform/platform.h>

Next!

[ 56%] Building C object sigar/build-src/CMakeFiles/sigar.dir/sigar.c.o
/root/couchbase/sigar/src/sigar.c:1071:12: fatal error: 'utmp.h' file not found
#  include <utmp.h>
           ^
1 error generated.
sigar/build-src/CMakeFiles/sigar.dir/build.make:77: recipe for target 'sigar/build-src/CMakeFiles/sigar.dir/sigar.c.o' failed
gmake[4]: *** [sigar/build-src/CMakeFiles/sigar.dir/sigar.c.o] Error 1
CMakeFiles/Makefile2:4148: recipe for target 'sigar/build-src/CMakeFiles/sigar.dir/all' failed

I was planning to port that file to utmpx, and then I wondered how the FreeBSD port of the library (java/sigar) was working. It turns out the patch had already been done:

Commit "Make utmp-handling more standards-compliant" in the amishHammer/sigar repository on GitHub (https://github.com/amishHammer/sigar/commit/67b476efe0f2a7c644f3966b79f5e358f67752e9):

diff --git a/src/sigar.c b/src/sigar.c
index 8bd7e91..7f76dfd 100644
--- a/src/sigar.c
+++ b/src/sigar.c
@@ -30,6 +30,11 @@
 #ifndef WIN32
 #include <arpa/inet.h>
 #endif
+#if defined(HAVE_UTMPX_H)
+# include <utmpx.h>
+#elif defined(HAVE_UTMP_H)
+# include <utmp.h>
+#endif
 
 #include "sigar.h"
 #include "sigar_private.h"
@@ -1024,40 +1029,7 @@ SIGAR_DECLARE(int) sigar_who_list_destroy(sigar_t *sigar,
     return SIGAR_OK;
 }
 
-#ifdef DARWIN
-#include <AvailabilityMacros.h>
-#endif
-#ifdef MAC_OS_X_VERSION_10_5
-#  if MAC_OS_X_VERSION_MIN_REQUIRED >= MAC_OS_X_VERSION_10_5
-#    define SIGAR_NO_UTMP
-#  endif
-/* else 10.4 and earlier or compiled with -mmacosx-version-min=10.3 */
-#endif
-
-#if defined(__sun)
-#  include <utmpx.h>
-#  define SIGAR_UTMP_FILE _UTMPX_FILE
-#  define ut_time ut_tv.tv_sec
-#elif defined(WIN32)
-/* XXX may not be the default */
-#define SIGAR_UTMP_FILE "C:\\cygwin\\var\\run\\utmp"
-#define UT_LINESIZE    16
-#define UT_NAMESIZE    16
-#define UT_HOSTSIZE    256
-#define UT_IDLEN   2
-#define ut_name ut_user
-
-struct utmp {
-    short ut_type; 
-    int ut_pid;        
-    char ut_line[UT_LINESIZE];
-    char ut_id[UT_IDLEN];
-    time_t ut_time;    
-    char ut_user[UT_NAMESIZE]; 
-    char ut_host[UT_HOSTSIZE]; 
-    long ut_addr;  
-};
-#elif defined(NETWARE)
+#if defined(NETWARE)
 static char *getpass(const char *prompt)
 {
     static char password[BUFSIZ];
@@ -1067,109 +1039,48 @@ static char *getpass(const char *prompt)
 
     return (char *)&password;
 }
-#elif !defined(SIGAR_NO_UTMP)
-#  include <utmp.h>
-#  ifdef UTMP_FILE
-#    define SIGAR_UTMP_FILE UTMP_FILE
-#  else
-#    define SIGAR_UTMP_FILE _PATH_UTMP
-#  endif
-#endif
-
-#if defined(__FreeBSD__) || defined(__OpenBSD__) || defined(__NetBSD__) || defined(DARWIN)
-#  define ut_user ut_name
 #endif
 
-#ifdef DARWIN
-/* XXX from utmpx.h; sizeof changed in 10.5 */
-/* additionally, utmpx does not work on 10.4 */
-#define SIGAR_HAS_UTMPX
-#define _PATH_UTMPX     "/var/run/utmpx"
-#define _UTX_USERSIZE   256     /* matches MAXLOGNAME */
-#define _UTX_LINESIZE   32
-#define _UTX_IDSIZE     4
-#define _UTX_HOSTSIZE   256
-struct utmpx {
-    char ut_user[_UTX_USERSIZE];    /* login name */
-    char ut_id[_UTX_IDSIZE];        /* id */
-    char ut_line[_UTX_LINESIZE];    /* tty name */
-    pid_t ut_pid;                   /* process id creating the entry */
-    short ut_type;                  /* type of this entry */
-    struct timeval ut_tv;           /* time entry was created */
-    char ut_host[_UTX_HOSTSIZE];    /* host name */
-    __uint32_t ut_pad[16];          /* reserved for future use */
-};
-#define ut_xtime ut_tv.tv_sec
-#define UTMPX_USER_PROCESS      7
-/* end utmpx.h */
-#define SIGAR_UTMPX_FILE _PATH_UTMPX
-#endif
-
-#if !defined(NETWARE) && !defined(_AIX)
-
 #define WHOCPY(dest, src) \
     SIGAR_SSTRCPY(dest, src); \
     if (sizeof(src) < sizeof(dest)) \
         dest[sizeof(src)] = '\0'
 
-#ifdef SIGAR_HAS_UTMPX
-static int sigar_who_utmpx(sigar_t *sigar,
-                           sigar_who_list_t *wholist)
+static int sigar_who_utmp(sigar_t *sigar,
+                          sigar_who_list_t *wholist)
 {
-    FILE *fp;
-    struct utmpx ut;
+#if defined(HAVE_UTMPX_H)
+    struct utmpx *ut;
 
-    if (!(fp = fopen(SIGAR_UTMPX_FILE, "r"))) {
-        return errno;
-    }
+    setutxent();
 
-    while (fread(&ut, sizeof(ut), 1, fp) == 1) {
+    while ((ut = getutxent()) != NULL) {
         sigar_who_t *who;
 
-        if (*ut.ut_user == '\0') {
+        if (*ut->ut_user == '\0') {
             continue;
         }
 
-#ifdef UTMPX_USER_PROCESS
-        if (ut.ut_type != UTMPX_USER_PROCESS) {
+        if (ut->ut_type != USER_PROCESS) {
             continue;
         }
-#endif
 
         SIGAR_WHO_LIST_GROW(wholist);
         who = &wholist->data[wholist->number++];
 
-        WHOCPY(who->user, ut.ut_user);
-        WHOCPY(who->device, ut.ut_line);
-        WHOCPY(who->host, ut.ut_host);
+        WHOCPY(who->user, ut->ut_user);
+        WHOCPY(who->device, ut->ut_line);
+        WHOCPY(who->host, ut->ut_host);
 
-        who->time = ut.ut_xtime;
+        who->time = ut->ut_tv.tv_sec;
     }
 
-    fclose(fp);
-
-    return SIGAR_OK;
-}
-#endif
-
-#if defined(SIGAR_NO_UTMP) && defined(SIGAR_HAS_UTMPX)
-#define sigar_who_utmp sigar_who_utmpx
-#else
-static int sigar_who_utmp(sigar_t *sigar,
-                          sigar_who_list_t *wholist)
-{
+    endutxent();
+#elif defined(HAVE_UTMP_H)
     FILE *fp;
-#ifdef __sun
-    /* use futmpx w/ pid32_t for sparc64 */
-    struct futmpx ut;
-#else
     struct utmp ut;
-#endif
-    if (!(fp = fopen(SIGAR_UTMP_FILE, "r"))) {
-#ifdef SIGAR_HAS_UTMPX
-        /* Darwin 10.5 */
-        return sigar_who_utmpx(sigar, wholist);
-#endif
+
+    if (!(fp = fopen(_PATH_UTMP, "r"))) {
         return errno;
     }
 
@@ -1189,7 +1100,7 @@ static int sigar_who_utmp(sigar_t *sigar,
         SIGAR_WHO_LIST_GROW(wholist);
         who = &wholist->data[wholist->number++];
 
-        WHOCPY(who->user, ut.ut_user);
+        WHOCPY(who->user, ut.ut_name);
         WHOCPY(who->device, ut.ut_line);
         WHOCPY(who->host, ut.ut_host);
 
@@ -1197,11 +1108,10 @@ static int sigar_who_utmp(sigar_t *sigar,
     }
 
     fclose(fp);
+#endif
 
     return SIGAR_OK;
 }
-#endif /* SIGAR_NO_UTMP */
-#endif /* NETWARE */
 
 #if defined(WIN32)

Next!

# gmake
[ 75%] Generating couch_btree.beam
compile: warnings being treated as errors
/root/couchbase/couchdb/src/couchdb/couch_btree.erl:415: variable 'NodeList' exported from 'case' (line 391)
/root/couchbase/couchdb/src/couchdb/couch_btree.erl:1010: variable 'NodeList' exported from 'case' (line 992)
couchdb/src/couchdb/CMakeFiles/couchdb.dir/build.make:151: recipe for target 'couchdb/src/couchdb/couch_btree.beam' failed
gmake[4]: *** [couchdb/src/couchdb/couch_btree.beam] Error 1
CMakeFiles/Makefile2:5531: recipe for target 'couchdb/src/couchdb/CMakeFiles/couchdb.dir/all' failed

Fortunately, I'm fluent in Erlang.

I'm not sure why the compiler option +warn_export_vars was set if the code does contain such errors. Let's fix them.
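
To see what the compiler is complaining about, here's a minimal sketch (a demo module of mine, not Couchbase code). Binding variables inside the branches of a case and using them afterwards "exports" them from the case, which +warn_export_vars flags; binding the value of the whole case expression is the idiomatic fix, and it's exactly the rewrite applied below.

-module(export_vars_demo).
-export([bad/1, good/1]).

%% Stand-in for couch_btree's get_node/2.
get_node(Info) -> {kp_node, [Info]}.

%% NodeType and NodeList are bound inside the case branches and
%% used after the case: they are "exported" from it.
bad(Info) ->
    case Info of
        nil ->
            NodeType = kv_node,
            NodeList = [];
        _ ->
            {NodeType, NodeList} = get_node(Info)
    end,
    {NodeType, NodeList}.

%% Binding the result of the whole case expression avoids the warning.
good(Info) ->
    {NodeType, NodeList} = case Info of
        nil -> {kv_node, []};
        _ -> get_node(Info)
    end,
    {NodeType, NodeList}.

Both versions do the same thing; the second just keeps every binding in one visible place.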

diff -u /root/couchbase/couchdb/src/couchdb/couch_btree.erl.orig /root/couchbase/couchdb/src/couchdb/couch_btree.erl
--- /root/couchbase/couchdb/src/couchdb/couch_btree.erl.orig    2015-10-07 21:59:43.359322000 +0200
+++ /root/couchbase/couchdb/src/couchdb/couch_btree.erl 2015-10-07 22:01:05.191344000 +0200
@@ -388,13 +388,12 @@
     end.

 modify_node(Bt, RootPointerInfo, Actions, QueryOutput, Acc, PurgeFun, PurgeFunAcc, KeepPurging) ->
-    case RootPointerInfo of
+    {NodeType, NodeList} = case RootPointerInfo of
     nil ->
-        NodeType = kv_node,
-        NodeList = [];
+        {kv_node, []};
     _Tuple ->
         Pointer = element(1, RootPointerInfo),
-        {NodeType, NodeList} = get_node(Bt, Pointer)
+        get_node(Bt, Pointer)
     end,

     case NodeType of
@@ -989,13 +988,12 @@

 guided_purge(Bt, NodeState, GuideFun, GuideAcc) ->
     % inspired by modify_node/5
-    case NodeState of
+    {NodeType, NodeList} = case NodeState of
     nil ->
-        NodeType = kv_node,
-        NodeList = [];
+        {kv_node, []};
     _Tuple ->
         Pointer = element(1, NodeState),
-        {NodeType, NodeList} = get_node(Bt, Pointer)
+        get_node(Bt, Pointer)
     end,
     {ok, NewNodeList, GuideAcc2, Bt2, Go} =
     case NodeType of

diff -u /root/couchbase/couchdb/src/couchdb/couch_compaction_daemon.erl.orig /root/couchbase/couchdb/src/couchdb/couch_compaction_daemon.erl
--- /root/couchbase/couchdb/src/couchdb/couch_compaction_daemon.erl.orig        2015-10-07 22:01:48.495966000 +0200
+++ /root/couchbase/couchdb/src/couchdb/couch_compaction_daemon.erl     2015-10-07 22:02:15.620989000 +0200
@@ -142,14 +142,14 @@
         true ->
             {ok, DbCompactPid} = couch_db:start_compact(Db),
             TimeLeft = compact_time_left(Config),
-            case Config#config.parallel_view_compact of
+            ViewsMonRef = case Config#config.parallel_view_compact of
             true ->
                 ViewsCompactPid = spawn_link(fun() ->
                     maybe_compact_views(DbName, DDocNames, Config)
                 end),
-                ViewsMonRef = erlang:monitor(process, ViewsCompactPid);
+                erlang:monitor(process, ViewsCompactPid);
             false ->
-                ViewsMonRef = nil
+                nil
             end,
             DbMonRef = erlang:monitor(process, DbCompactPid),
             receive

Next!

[ 84%] Generating ebin/couch_set_view_group.beam
/root/couchbase/couchdb/src/couch_set_view/src/couch_set_view_group.erl:3178: type dict() undefined
couchdb/src/couch_set_view/CMakeFiles/couch_set_view.dir/build.make:87: recipe for target 'couchdb/src/couch_set_view/ebin/couch_set_view_group.beam' failed
gmake[4]: *** [couchdb/src/couch_set_view/ebin/couch_set_view_group.beam] Error 1
CMakeFiles/Makefile2:5720: recipe for target 'couchdb/src/couch_set_view/CMakeFiles/couch_set_view.dir/all' failed
gmake[3]: *** [couchdb/src/couch_set_view/CMakeFiles/couch_set_view.dir/all] Error 2

Those errors happen because I'm building the project with Erlang 18. I guess I wouldn't have had them with version 17.

Anyway, let's fix them.
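
All of these boil down to the same mechanical change: OTP 17 deprecated the built-in spec types dict(), queue(), set() and array(), and OTP 18 removed them, so every spec and record annotation has to use the module-qualified types instead. A compilable sketch of the new style (a demo module of mine, not Couchbase code):

-module(dict_spec_demo).
-export([new/0, store/3]).

%% On OTP 18 the type must be written dict:dict() -- or, better,
%% the parameterized dict:dict(Key, Value) -- since the bare
%% dict() type is gone.
-spec new() -> dict:dict().
new() -> dict:new().

-spec store(term(), term(), dict:dict()) -> dict:dict().
store(Key, Value, Dict) -> dict:store(Key, Value, Dict).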

diff -r -u couchdb.orig/src/couch_dcp/src/couch_dcp_client.erl couchdb/src/couch_dcp/src/couch_dcp_client.erl
--- couchdb.orig/src/couch_dcp/src/couch_dcp_client.erl 2015-10-08 11:26:37.034138000 +0200
+++ couchdb/src/couch_dcp/src/couch_dcp_client.erl  2015-10-07 22:07:35.556126000 +0200
@@ -47,13 +47,13 @@
     bufsocket = nil                 :: #bufsocket{} | nil,
     timeout = 5000                  :: timeout(),
     request_id = 0                  :: request_id(),
-    pending_requests = dict:new()   :: dict(),
-    stream_queues = dict:new()      :: dict(),
+    pending_requests = dict:new()   :: dict:dict(),
+    stream_queues = dict:new()      :: dict:dict(),
     active_streams = []             :: list(),
     worker_pid                      :: pid(),
     max_buffer_size = ?MAX_BUF_SIZE :: integer(),
     total_buffer_size = 0           :: non_neg_integer(),
-    stream_info = dict:new()        :: dict(),
+    stream_info = dict:new()        :: dict:dict(),
     args = []                       :: list()
 }).
 
@@ -1378,7 +1378,7 @@
         {error, Error}
     end.
 
--spec get_queue_size(queue(), non_neg_integer()) -> non_neg_integer().
+-spec get_queue_size(queue:queue(), non_neg_integer()) -> non_neg_integer().
 get_queue_size(EvQueue, Size) ->
     case queue:out(EvQueue) of
     {empty, _} ->
diff -r -u couchdb.orig/src/couch_set_view/src/couch_set_view_group.erl couchdb/src/couch_set_view/src/couch_set_view_group.erl
--- couchdb.orig/src/couch_set_view/src/couch_set_view_group.erl    2015-10-08 11:26:37.038856000 +0200
+++ couchdb/src/couch_set_view/src/couch_set_view_group.erl 2015-10-07 22:04:53.198951000 +0200
@@ -118,7 +118,7 @@
     auto_transfer_replicas = true      :: boolean(),
     replica_partitions = []            :: ordsets:ordset(partition_id()),
     pending_transition_waiters = []    :: [{From::{pid(), reference()}, #set_view_group_req{}}],
-    update_listeners = dict:new()      :: dict(),
+    update_listeners = dict:new()      :: dict:dict(),
     compact_log_files = nil            :: 'nil' | {[[string()]], partition_seqs(), partition_versions()},
     timeout = ?DEFAULT_TIMEOUT         :: non_neg_integer() | 'infinity'
 }).
@@ -3136,7 +3136,7 @@
     }.
 
 
--spec notify_update_listeners(#state{}, dict(), #set_view_group{}) -> dict().
+-spec notify_update_listeners(#state{}, dict:dict(), #set_view_group{}) -> dict:dict().
 notify_update_listeners(State, Listeners, NewGroup) ->
     case dict:size(Listeners) == 0 of
     true ->
@@ -3175,7 +3175,7 @@
     end.
 
 
--spec error_notify_update_listeners(#state{}, dict(), monitor_error()) -> dict().
+-spec error_notify_update_listeners(#state{}, dict:dict(), monitor_error()) -> dict:dict().
 error_notify_update_listeners(State, Listeners, Error) ->
     _ = dict:fold(
         fun(Ref, #up_listener{pid = ListPid, partition = PartId}, _Acc) ->
diff -r -u couchdb.orig/src/couch_set_view/src/mapreduce_view.erl couchdb/src/couch_set_view/src/mapreduce_view.erl
--- couchdb.orig/src/couch_set_view/src/mapreduce_view.erl  2015-10-08 11:26:37.040295000 +0200
+++ couchdb/src/couch_set_view/src/mapreduce_view.erl   2015-10-07 22:05:56.157242000 +0200
@@ -109,7 +109,7 @@
     convert_primary_index_kvs_to_binary(Rest, Group, [{KeyBin, V} | Acc]).
 
 
--spec finish_build(#set_view_group{}, dict(), string()) ->
+-spec finish_build(#set_view_group{}, dict:dict(), string()) ->
                           {#set_view_group{}, pid()}.
 finish_build(Group, TmpFiles, TmpDir) ->
     #set_view_group{
diff -r -u couchdb.orig/src/couchdb/couch_btree.erl couchdb/src/couchdb/couch_btree.erl
--- couchdb.orig/src/couchdb/couch_btree.erl    2015-10-08 11:26:37.049320000 +0200
+++ couchdb/src/couchdb/couch_btree.erl 2015-10-07 22:01:05.191344000 +0200
@@ -388,13 +388,12 @@
     end.
 
 modify_node(Bt, RootPointerInfo, Actions, QueryOutput, Acc, PurgeFun, PurgeFunAcc, KeepPurging) ->
-    case RootPointerInfo of
+    {NodeType, NodeList} = case RootPointerInfo of
     nil ->
-        NodeType = kv_node,
-        NodeList = [];
+        {kv_node, []};
     _Tuple ->
         Pointer = element(1, RootPointerInfo),
-        {NodeType, NodeList} = get_node(Bt, Pointer)
+        get_node(Bt, Pointer)
     end,
 
     case NodeType of
@@ -989,13 +988,12 @@
 
 guided_purge(Bt, NodeState, GuideFun, GuideAcc) ->
     % inspired by modify_node/5
-    case NodeState of
+    {NodeType, NodeList} = case NodeState of
     nil ->
-        NodeType = kv_node,
-        NodeList = [];
+        {kv_node, []};
     _Tuple ->
         Pointer = element(1, NodeState),
-        {NodeType, NodeList} = get_node(Bt, Pointer)
+        get_node(Bt, Pointer)
     end,
     {ok, NewNodeList, GuideAcc2, Bt2, Go} =
     case NodeType of
diff -r -u couchdb.orig/src/couchdb/couch_compaction_daemon.erl couchdb/src/couchdb/couch_compaction_daemon.erl
--- couchdb.orig/src/couchdb/couch_compaction_daemon.erl    2015-10-08 11:26:37.049734000 +0200
+++ couchdb/src/couchdb/couch_compaction_daemon.erl 2015-10-07 22:02:15.620989000 +0200
@@ -142,14 +142,14 @@
         true ->
             {ok, DbCompactPid} = couch_db:start_compact(Db),
             TimeLeft = compact_time_left(Config),
-            case Config#config.parallel_view_compact of
+            ViewsMonRef = case Config#config.parallel_view_compact of
             true ->
                 ViewsCompactPid = spawn_link(fun() ->
                     maybe_compact_views(DbName, DDocNames, Config)
                 end),
-                ViewsMonRef = erlang:monitor(process, ViewsCompactPid);
+                erlang:monitor(process, ViewsCompactPid);
             false ->
-                ViewsMonRef = nil
+                nil
             end,
             DbMonRef = erlang:monitor(process, DbCompactPid),
             receive

# gmake
[...]
[ 98%] Generating ebin/vtree_cleanup.beam
compile: warnings being treated as errors
/root/couchbase/geocouch/vtree/src/vtree_cleanup.erl:32: erlang:now/0: Deprecated BIF. See the "Time and Time Correction in Erlang" chapter of the ERTS User's Guide for more information.
/root/couchbase/geocouch/vtree/src/vtree_cleanup.erl:42: erlang:now/0: Deprecated BIF. See the "Time and Time Correction in Erlang" chapter of the ERTS User's Guide for more information.
../geocouch/build/vtree/CMakeFiles/vtree.dir/build.make:64: recipe for target '../geocouch/build/vtree/ebin/vtree_cleanup.beam' failed
gmake[4]: *** [../geocouch/build/vtree/ebin/vtree_cleanup.beam] Error 1
CMakeFiles/Makefile2:6702: recipe for target '../geocouch/build/vtree/CMakeFiles/vtree.dir/all' failed
gmake[3]: *** [../geocouch/build/vtree/CMakeFiles/vtree.dir/all] Error 2
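
This time it's geocouch tripping over erlang:now/0, which OTP 18 deprecates. For wall-clock timestamps the drop-in replacement is erlang:timestamp/0 (the ns_server patches further down use it); for measuring elapsed time, which is what the vtree code wants, erlang:monotonic_time/1 is the better tool. A minimal sketch of the timing pattern (measure/1 is a made-up helper, not geocouch code):

-module(time_demo).
-export([measure/1]).

%% Time a fun in whole seconds, like the patched vtree code below.
%% Mind the resolution: the old timer:now_diff(now(), T1)/1000000
%% gave fractional seconds, this gives integers.
measure(Fun) ->
    T1 = erlang:monotonic_time(seconds),
    Result = Fun(),
    {Result, erlang:monotonic_time(seconds) - T1}.

Here are the geocouch patches:
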
diff -r -u geocouch.orig/gc-couchbase/src/spatial_view.erl geocouch/gc-couchbase/src/spatial_view.erl
--- geocouch.orig/gc-couchbase/src/spatial_view.erl 2015-10-08 11:29:05.323361000 +0200
+++ geocouch/gc-couchbase/src/spatial_view.erl  2015-10-07 22:17:09.741790000 +0200
@@ -166,7 +166,7 @@
 
 
 % Build the tree out of the sorted files
--spec finish_build(#set_view_group{}, dict(), string()) ->
+-spec finish_build(#set_view_group{}, dict:dict(), string()) ->
                           {#set_view_group{}, pid()}.
 finish_build(Group, TmpFiles, TmpDir) ->
     #set_view_group{
diff -r -u geocouch.orig/vtree/src/vtree_cleanup.erl geocouch/vtree/src/vtree_cleanup.erl
--- geocouch.orig/vtree/src/vtree_cleanup.erl   2015-10-08 11:29:05.327423000 +0200
+++ geocouch/vtree/src/vtree_cleanup.erl    2015-10-07 22:12:26.915600000 +0200
@@ -29,7 +29,7 @@
 cleanup(#vtree{root=nil}=Vt, _Nodes) ->
     Vt;
 cleanup(Vt, Nodes) ->
-    T1 = now(),
+    T1 = erlang:monotonic_time(seconds),
     Root = Vt#vtree.root,
     PartitionedNodes = [Nodes],
     KpNodes = cleanup_multiple(Vt, PartitionedNodes, [Root]),
@@ -39,7 +39,7 @@
                       vtree_modify:write_new_root(Vt, KpNodes)
               end,
     ?LOG_DEBUG("Cleanup took: ~ps~n",
-               [timer:now_diff(now(), T1)/1000000]),
+               [erlang:monotonic_time(seconds) - T1]),
     Vt#vtree{root=NewRoot}.
 
 -spec cleanup_multiple(Vt :: #vtree{}, ToCleanup :: [#kv_node{}],
diff -r -u geocouch.orig/vtree/src/vtree_delete.erl geocouch/vtree/src/vtree_delete.erl
--- geocouch.orig/vtree/src/vtree_delete.erl    2015-10-08 11:29:05.327537000 +0200
+++ geocouch/vtree/src/vtree_delete.erl 2015-10-07 22:13:51.733064000 +0200
@@ -30,7 +30,7 @@
 delete(#vtree{root=nil}=Vt, _Nodes) ->
     Vt;
 delete(Vt, Nodes) ->
-    T1 = now(),
+    T1 = erlang:monotonic_time(seconds),
     Root = Vt#vtree.root,
     PartitionedNodes = [Nodes],
     KpNodes = delete_multiple(Vt, PartitionedNodes, [Root]),
@@ -40,7 +40,7 @@
                       vtree_modify:write_new_root(Vt, KpNodes)
               end,
     ?LOG_DEBUG("Deletion took: ~ps~n",
-               [timer:now_diff(now(), T1)/1000000]),
+               [erlang:monotonic_time(seconds) - T1]),
     Vt#vtree{root=NewRoot}.
 
 
diff -r -u geocouch.orig/vtree/src/vtree_insert.erl geocouch/vtree/src/vtree_insert.erl
--- geocouch.orig/vtree/src/vtree_insert.erl    2015-10-08 11:29:05.327648000 +0200
+++ geocouch/vtree/src/vtree_insert.erl 2015-10-07 22:15:50.812447000 +0200
@@ -26,7 +26,7 @@
 insert(Vt, []) ->
     Vt;
 insert(#vtree{root=nil}=Vt, Nodes) ->
-    T1 = now(),
+    T1 = erlang:monotonic_time(seconds),
     % If we would do single inserts, the first node that was inserted would
     % have set the original Mbb `MbbO`
     MbbO = (hd(Nodes))#kv_node.key,
@@ -48,7 +48,7 @@
             ArbitraryBulkSize = round(math:log(Threshold)+50),
             Vt3 = insert_in_bulks(Vt2, Rest, ArbitraryBulkSize),
             ?LOG_DEBUG("Insertion into empty tree took: ~ps~n",
-                      [timer:now_diff(now(), T1)/1000000]),
+                      [erlang:monotonic_time(seconds) - T1]),
             ?LOG_DEBUG("Root pos: ~p~n", [(Vt3#vtree.root)#kp_node.childpointer]),
             Vt3;
         false ->
@@ -56,13 +56,13 @@
             Vt#vtree{root=Root}
     end;
 insert(Vt, Nodes) ->
-    T1 = now(),
+    T1 = erlang:monotonic_time(seconds),
     Root = Vt#vtree.root,
     PartitionedNodes = [Nodes],
     KpNodes = insert_multiple(Vt, PartitionedNodes, [Root]),
     NewRoot = vtree_modify:write_new_root(Vt, KpNodes),
     ?LOG_DEBUG("Insertion into existing tree took: ~ps~n",
-               [timer:now_diff(now(), T1)/1000000]),
+               [erlang:monotonic_time(seconds) - T1]),
     Vt#vtree{root=NewRoot}.

Next up is ns_server's ale logger, with the same dict() specs:

diff -u ns_server/deps/ale/src/ale.erl.orig ns_server/deps/ale/src/ale.erl
--- ns_server/deps/ale/src/ale.erl.orig 2015-10-07 22:19:28.730212000 +0200
+++ ns_server/deps/ale/src/ale.erl      2015-10-07 22:20:09.788761000 +0200
@@ -45,12 +45,12 @@

 -include("ale.hrl").

--record(state, {sinks   :: dict(),
-                loggers :: dict()}).
+-record(state, {sinks   :: dict:dict(),
+                loggers :: dict:dict()}).

 -record(logger, {name      :: atom(),
                  loglevel  :: loglevel(),
-                 sinks     :: dict(),
+                 sinks     :: dict:dict(),
                  formatter :: module()}).

 -record(sink, {name     :: atom(),

# gmake
[...]
==> ns_babysitter (compile)
src/ns_crash_log.erl:18: type queue() undefined
../ns_server/build/deps/ns_babysitter/CMakeFiles/ns_babysitter.dir/build.make:49: recipe for target '../ns_server/build/deps/ns_babysitter/CMakeFiles/ns_babysitter' failed
gmake[4]: *** [../ns_server/build/deps/ns_babysitter/CMakeFiles/ns_babysitter] Error 1
CMakeFiles/Makefile2:7484: recipe for target '../ns_server/build/deps/ns_babysitter/CMakeFiles/ns_babysitter.dir/all' failed
gmake[3]: *** [../ns_server/build/deps/ns_babysitter/CMakeFiles/ns_babysitter.dir/all] Error 2

More of the same throughout ns_server. Here's the full recursive diff:

diff -r -u ns_server.orig/deps/ale/src/ale.erl ns_server/deps/ale/src/ale.erl
--- ns_server.orig/deps/ale/src/ale.erl 2015-10-08 11:31:20.520281000 +0200
+++ ns_server/deps/ale/src/ale.erl  2015-10-07 22:20:09.788761000 +0200
@@ -45,12 +45,12 @@
 
 -include("ale.hrl").
 
--record(state, {sinks   :: dict(),
-                loggers :: dict()}).
+-record(state, {sinks   :: dict:dict(),
+                loggers :: dict:dict()}).
 
 -record(logger, {name      :: atom(),
                  loglevel  :: loglevel(),
-                 sinks     :: dict(),
+                 sinks     :: dict:dict(),
                  formatter :: module()}).
 
 -record(sink, {name     :: atom(),
diff -r -u ns_server.orig/deps/ns_babysitter/src/ns_crash_log.erl ns_server/deps/ns_babysitter/src/ns_crash_log.erl
--- ns_server.orig/deps/ns_babysitter/src/ns_crash_log.erl  2015-10-08 11:31:20.540433000 +0200
+++ ns_server/deps/ns_babysitter/src/ns_crash_log.erl   2015-10-07 22:21:45.292975000 +0200
@@ -13,9 +13,9 @@
 -define(MAX_CRASHES_LEN, 100).
 
 -record(state, {file_path :: file:filename(),
-                crashes :: queue(),
+                crashes :: queue:queue(),
                 crashes_len :: non_neg_integer(),
-                crashes_saved :: queue(),
+                crashes_saved :: queue:queue(),
                 consumer_from = undefined :: undefined | {pid(), reference()},
                 consumer_mref = undefined :: undefined | reference()
                }).
diff -r -u ns_server.orig/include/remote_clusters_info.hrl ns_server/include/remote_clusters_info.hrl
--- ns_server.orig/include/remote_clusters_info.hrl 2015-10-08 11:31:20.544760000 +0200
+++ ns_server/include/remote_clusters_info.hrl  2015-10-07 22:22:48.541494000 +0200
@@ -20,6 +20,6 @@
                         cluster_cert :: binary() | undefined,
                         server_list_nodes :: [#remote_node{}],
                         bucket_caps :: [binary()],
-                        raw_vbucket_map :: dict(),
-                        capi_vbucket_map :: dict(),
+                        raw_vbucket_map :: dict:dict(),
+                        capi_vbucket_map :: dict:dict(),
                         cluster_version :: {integer(), integer()}}).
diff -r -u ns_server.orig/src/auto_failover.erl ns_server/src/auto_failover.erl
--- ns_server.orig/src/auto_failover.erl    2015-10-08 11:31:21.396519000 +0200
+++ ns_server/src/auto_failover.erl 2015-10-08 11:19:43.710301000 +0200
@@ -336,7 +336,7 @@
 %%
 
 %% @doc Returns a list of nodes that should be active, but are not running.
--spec actual_down_nodes(dict(), [atom()], [{atom(), term()}]) -> [atom()].
+-spec actual_down_nodes(dict:dict(), [atom()], [{atom(), term()}]) -> [atom()].
 actual_down_nodes(NodesDict, NonPendingNodes, Config) ->
     % Get all buckets
     BucketConfigs = ns_bucket:get_buckets(Config),
diff -r -u ns_server.orig/src/dcp_upgrade.erl ns_server/src/dcp_upgrade.erl
--- ns_server.orig/src/dcp_upgrade.erl  2015-10-08 11:31:21.400562000 +0200
+++ ns_server/src/dcp_upgrade.erl   2015-10-08 11:19:47.370353000 +0200
@@ -37,7 +37,7 @@
                 num_buckets :: non_neg_integer(),
                 bucket :: bucket_name(),
                 bucket_config :: term(),
-                progress :: dict(),
+                progress :: dict:dict(),
                 workers :: [pid()]}).
 
 start_link(Buckets) ->
diff -r -u ns_server.orig/src/janitor_agent.erl ns_server/src/janitor_agent.erl
--- ns_server.orig/src/janitor_agent.erl    2015-10-08 11:31:21.401859000 +0200
+++ ns_server/src/janitor_agent.erl 2015-10-08 11:18:09.979728000 +0200
@@ -43,7 +43,7 @@
                 rebalance_status = finished :: in_process | finished,
                 replicators_primed :: boolean(),
 
-                apply_vbucket_states_queue :: queue(),
+                apply_vbucket_states_queue :: queue:queue(),
                 apply_vbucket_states_worker :: undefined | pid(),
                 rebalance_subprocesses_registry :: pid()}).
 
diff -r -u ns_server.orig/src/menelaus_web_alerts_srv.erl ns_server/src/menelaus_web_alerts_srv.erl
--- ns_server.orig/src/menelaus_web_alerts_srv.erl  2015-10-08 11:31:21.405690000 +0200
+++ ns_server/src/menelaus_web_alerts_srv.erl   2015-10-08 10:58:15.641331000 +0200
@@ -219,7 +219,7 @@
 
 %% @doc if listening on a non localhost ip, detect differences between
 %% external listening host and current node host
--spec check(atom(), dict(), list(), [{atom(),number()}]) -> dict().
+-spec check(atom(), dict:dict(), list(), [{atom(),number()}]) -> dict:dict().
 check(ip, Opaque, _History, _Stats) ->
     {_Name, Host} = misc:node_name_host(node()),
     case can_listen(Host) of
@@ -290,7 +290,7 @@
 
 %% @doc only check for disk usage if there has been no previous
 %% errors or last error was over the timeout ago
--spec hit_rate_limit(atom(), dict()) -> true | false.
+-spec hit_rate_limit(atom(), dict:dict()) -> true | false.
 hit_rate_limit(Key, Dict) ->
     case dict:find(Key, Dict) of
         error ->
@@ -355,7 +355,7 @@
 
 
 %% @doc list of buckets thats measured stats have increased
--spec stat_increased(dict(), dict()) -> list().
+-spec stat_increased(dict:dict(), dict:dict()) -> list().
 stat_increased(New, Old) ->
     [Bucket || {Bucket, Val} <- dict:to_list(New), increased(Bucket, Val, Old)].
 
@@ -392,7 +392,7 @@
 
 
 %% @doc Lookup old value and test for increase
--spec increased(string(), integer(), dict()) -> true | false.
+-spec increased(string(), integer(), dict:dict()) -> true | false.
 increased(Key, Val, Dict) ->
     case dict:find(Key, Dict) of
         error ->
diff -r -u ns_server.orig/src/misc.erl ns_server/src/misc.erl
--- ns_server.orig/src/misc.erl 2015-10-08 11:31:21.407175000 +0200
+++ ns_server/src/misc.erl  2015-10-08 10:55:15.167246000 +0200
@@ -54,7 +54,7 @@
 randomize() ->
     case get(random_seed) of
         undefined ->
-            random:seed(erlang:now());
+            random:seed(erlang:timestamp());
         _ ->
             ok
     end.
@@ -303,8 +303,8 @@
 
 position(E, [_|List], N) -> position(E, List, N+1).
 
-now_int()   -> time_to_epoch_int(now()).
-now_float() -> time_to_epoch_float(now()).
+now_int()   -> time_to_epoch_int(erlang:timestamp()).
+now_float() -> time_to_epoch_float(erlang:timestamp()).
 
 time_to_epoch_int(Time) when is_integer(Time) or is_float(Time) ->
   Time;
@@ -1239,7 +1239,7 @@
 
 
 %% Get an item from from a dict, if it doesnt exist return default
--spec dict_get(term(), dict(), term()) -> term().
+-spec dict_get(term(), dict:dict(), term()) -> term().
 dict_get(Key, Dict, Default) ->
     case dict:is_key(Key, Dict) of
         true -> dict:fetch(Key, Dict);
diff -r -u ns_server.orig/src/ns_doctor.erl ns_server/src/ns_doctor.erl
--- ns_server.orig/src/ns_doctor.erl    2015-10-08 11:31:21.410269000 +0200
+++ ns_server/src/ns_doctor.erl 2015-10-08 10:53:49.208657000 +0200
@@ -30,8 +30,8 @@
          get_tasks_version/0, build_tasks_list/2]).
 
 -record(state, {
-          nodes :: dict(),
-          tasks_hash_nodes :: undefined | dict(),
+          nodes :: dict:dict(),
+          tasks_hash_nodes :: undefined | dict:dict(),
           tasks_hash :: undefined | integer(),
           tasks_version :: undefined | string()
          }).
@@ -112,14 +112,14 @@
     RV = case dict:find(Node, Nodes) of
              {ok, Status} ->
                  LiveNodes = [node() | nodes()],
-                 annotate_status(Node, Status, now(), LiveNodes);
+                 annotate_status(Node, Status, erlang:timestamp(), LiveNodes);
              _ ->
                  []
          end,
     {reply, RV, State};
 
 handle_call(get_nodes, _From, #state{nodes=Nodes} = State) ->
-    Now = erlang:now(),
+    Now = erlang:timestamp(),
     LiveNodes = [node()|nodes()],
     Nodes1 = dict:map(
                fun (Node, Status) ->
@@ -210,7 +210,7 @@
         orelse OldReadyBuckets =/= NewReadyBuckets.
 
 update_status(Name, Status0, Dict) ->
-    Status = [{last_heard, erlang:now()} | Status0],
+    Status = [{last_heard, erlang:timestamp()} | Status0],
     PrevStatus = case dict:find(Name, Dict) of
                      {ok, V} -> V;
                      error -> []
diff -r -u ns_server.orig/src/ns_janitor_map_recoverer.erl ns_server/src/ns_janitor_map_recoverer.erl
--- ns_server.orig/src/ns_janitor_map_recoverer.erl 2015-10-08 11:31:21.410945000 +0200
+++ ns_server/src/ns_janitor_map_recoverer.erl  2015-10-08 10:52:23.927033000 +0200
@@ -79,7 +79,7 @@
     end.
 
 -spec recover_map([{non_neg_integer(), node()}],
-                  dict(),
+                  dict:dict(),
                   boolean(),
                   non_neg_integer(),
                   pos_integer(),
diff -r -u ns_server.orig/src/ns_memcached.erl ns_server/src/ns_memcached.erl
--- ns_server.orig/src/ns_memcached.erl 2015-10-08 11:31:21.411920000 +0200
+++ ns_server/src/ns_memcached.erl  2015-10-08 10:51:08.281320000 +0200
@@ -65,9 +65,9 @@
           running_very_heavy = 0,
           %% NOTE: otherwise dialyzer seemingly thinks it's possible
           %% for queue fields to be undefined
-          fast_calls_queue = impossible :: queue(),
-          heavy_calls_queue = impossible :: queue(),
-          very_heavy_calls_queue = impossible :: queue(),
+          fast_calls_queue = impossible :: queue:queue(),
+          heavy_calls_queue = impossible :: queue:queue(),
+          very_heavy_calls_queue = impossible :: queue:queue(),
           status :: connecting | init | connected | warmed,
           start_time::tuple(),
           bucket::nonempty_string(),
diff -r -u ns_server.orig/src/ns_orchestrator.erl ns_server/src/ns_orchestrator.erl
--- ns_server.orig/src/ns_orchestrator.erl  2015-10-08 11:31:21.412957000 +0200
+++ ns_server/src/ns_orchestrator.erl   2015-10-08 10:45:51.967739000 +0200
@@ -251,7 +251,7 @@
                             not_needed |
                             {error, {failed_nodes, [node()]}}
   when UUID :: binary(),
-       RecoveryMap :: dict().
+       RecoveryMap :: dict:dict().
 start_recovery(Bucket) ->
     wait_for_orchestrator(),
     gen_fsm:sync_send_event(?SERVER, {start_recovery, Bucket}).
@@ -260,7 +260,7 @@
   when Status :: [{bucket, bucket_name()} |
                   {uuid, binary()} |
                   {recovery_map, RecoveryMap}],
-       RecoveryMap :: dict().
+       RecoveryMap :: dict:dict().
 recovery_status() ->
     case is_recovery_running() of
         false ->
@@ -271,7 +271,7 @@
     end.
 
 -spec recovery_map(bucket_name(), UUID) -> bad_recovery | {ok, RecoveryMap}
-  when RecoveryMap :: dict(),
+  when RecoveryMap :: dict:dict(),
        UUID :: binary().
 recovery_map(Bucket, UUID) ->
     wait_for_orchestrator(),
@@ -1062,7 +1062,7 @@
             {next_state, FsmState, State#janitor_state{remaining_buckets = NewBucketRequests}}
     end.
 
--spec update_progress(dict()) -> ok.
+-spec update_progress(dict:dict()) -> ok.
 update_progress(Progress) ->
     gen_fsm:send_event(?SERVER, {update_progress, Progress}).
 
diff -r -u ns_server.orig/src/ns_replicas_builder.erl ns_server/src/ns_replicas_builder.erl
--- ns_server.orig/src/ns_replicas_builder.erl  2015-10-08 11:31:21.413763000 +0200
+++ ns_server/src/ns_replicas_builder.erl   2015-10-08 10:43:03.655761000 +0200
@@ -153,7 +153,7 @@
             observe_wait_all_done_old_style_loop(Bucket, SrcNode, Sleeper, NewTapNames, SleepsSoFar+1)
     end.
 
--spec filter_true_producers(list(), set(), binary()) -> [binary()].
+-spec filter_true_producers(list(), set:set(), binary()) -> [binary()].
 filter_true_producers(PList, TapNamesSet, StatName) ->
     [TapName
      || {<<"eq_tapq:replication_", Key/binary>>, <<"true">>} <- PList,
diff -r -u ns_server.orig/src/ns_vbucket_mover.erl ns_server/src/ns_vbucket_mover.erl
--- ns_server.orig/src/ns_vbucket_mover.erl 2015-10-08 11:31:21.415305000 +0200
+++ ns_server/src/ns_vbucket_mover.erl  2015-10-08 10:42:02.815008000 +0200
@@ -36,14 +36,14 @@
 
 -export([inhibit_view_compaction/3]).
 
--type progress_callback() :: fun((dict()) -> any()).
+-type progress_callback() :: fun((dict:dict()) -> any()).
 
 -record(state, {bucket::nonempty_string(),
                 disco_events_subscription::pid(),
-                map::array(),
+                map::array:array(),
                 moves_scheduler_state,
                 progress_callback::progress_callback(),
-                all_nodes_set::set(),
+                all_nodes_set::set:set(),
                 replication_type::bucket_replication_type()}).
 
 %%
@@ -218,14 +218,14 @@
 
 %% @private
 %% @doc Convert a map array back to a map list.
--spec array_to_map(array()) -> vbucket_map().
+-spec array_to_map(array:array()) -> vbucket_map().
 array_to_map(Array) ->
     array:to_list(Array).
 
 %% @private
 %% @doc Convert a map, which is normally a list, into an array so that
 %% we can randomly access the replication chains.
--spec map_to_array(vbucket_map()) -> array().
+-spec map_to_array(vbucket_map()) -> array:array().
 map_to_array(Map) ->
     array:fix(array:from_list(Map)).
 
diff -r -u ns_server.orig/src/path_config.erl ns_server/src/path_config.erl
--- ns_server.orig/src/path_config.erl  2015-10-08 11:31:21.415376000 +0200
+++ ns_server/src/path_config.erl   2015-10-08 10:38:48.687500000 +0200
@@ -53,7 +53,7 @@
     filename:join(component_path(NameAtom), SubPath).
 
 tempfile(Dir, Prefix, Suffix) ->
-    {_, _, MicroSecs} = erlang:now(),
+    {_, _, MicroSecs} = erlang:timestamp(),
     Pid = os:getpid(),
     Filename = Prefix ++ integer_to_list(MicroSecs) ++ "_" ++
                Pid ++ Suffix,
diff -r -u ns_server.orig/src/recoverer.erl ns_server/src/recoverer.erl
--- ns_server.orig/src/recoverer.erl    2015-10-08 11:31:21.415655000 +0200
+++ ns_server/src/recoverer.erl 2015-10-08 10:36:46.182185000 +0200
@@ -23,16 +23,16 @@
          is_recovery_complete/1]).
 
 -record(state, {bucket_config :: list(),
-                recovery_map :: dict(),
-                post_recovery_chains :: dict(),
-                apply_map :: array(),
-                effective_map :: array()}).
+                recovery_map :: dict:dict(),
+                post_recovery_chains :: dict:dict(),
+                apply_map :: array:array(),
+                effective_map :: array:array()}).
 
 -spec start_recovery(BucketConfig) ->
                             {ok, RecoveryMap, {Servers, BucketConfig}, #state{}}
                                 | not_needed
   when BucketConfig :: list(),
-       RecoveryMap :: dict(),
+       RecoveryMap :: dict:dict(),
        Servers :: [node()].
 start_recovery(BucketConfig) ->
     NumVBuckets = proplists:get_value(num_vbuckets, BucketConfig),
@@ -92,7 +92,7 @@
                     effective_map=array:from_list(OldMap)}}
     end.
 
--spec get_recovery_map(#state{}) -> dict().
+-spec get_recovery_map(#state{}) -> dict:dict().
 get_recovery_map(#state{recovery_map=RecoveryMap}) ->
     RecoveryMap.
 
@@ -205,7 +205,7 @@
 -define(MAX_NUM_SERVERS, 50).
 
 compute_recovery_map_test_() ->
-    random:seed(now()),
+    random:seed(erlang:timestamp()),
 
     {timeout, 100,
      {inparallel,
diff -r -u ns_server.orig/src/remote_clusters_info.erl ns_server/src/remote_clusters_info.erl
--- ns_server.orig/src/remote_clusters_info.erl 2015-10-08 11:31:21.416143000 +0200
+++ ns_server/src/remote_clusters_info.erl  2015-10-08 10:37:57.095653000 +0200
@@ -121,10 +121,10 @@
           {node, node(), remote_clusters_info_config_update_interval}, 10000)).
 
 -record(state, {cache_path :: string(),
-                scheduled_config_updates :: set(),
-                remote_bucket_requests :: dict(),
-                remote_bucket_waiters :: dict(),
-                remote_bucket_waiters_trefs :: dict()}).
+                scheduled_config_updates :: set:set(),
+                remote_bucket_requests :: dict:dict(),
+                remote_bucket_waiters :: dict:dict(),
+                remote_bucket_waiters_trefs :: dict:dict()}).
 
 start_link() ->
     gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).
diff -r -u ns_server.orig/src/ringbuffer.erl ns_server/src/ringbuffer.erl
--- ns_server.orig/src/ringbuffer.erl   2015-10-08 11:31:21.416440000 +0200
+++ ns_server/src/ringbuffer.erl    2015-10-08 10:33:36.063532000 +0200
@@ -18,7 +18,7 @@
 -export([new/1, to_list/1, to_list/2, to_list/3, add/2]).
 
 % Create a ringbuffer that can hold at most Size items.
--spec new(integer()) -> queue().
+-spec new(integer()) -> queue:queue().
 new(Size) ->
     queue:from_list([empty || _ <- lists:seq(1, Size)]).
 
@@ -26,15 +26,15 @@
 % Convert the ringbuffer to a list (oldest items first).
 -spec to_list(integer()) -> list().
 to_list(R) -> to_list(R, false).
--spec to_list(queue(), W) -> list() when is_subtype(W, boolean());
-             (integer(), queue()) -> list().
+-spec to_list(queue:queue(), W) -> list() when is_subtype(W, boolean());
+             (integer(), queue:queue()) -> list().
 to_list(R, WithEmpties) when is_boolean(WithEmpties) ->
     queue:to_list(to_queue(R));
 
 % Get at most the N newest items from the given ringbuffer (oldest first).
 to_list(N, R) -> to_list(N, R, false).
 
--spec to_list(integer(), queue(), boolean()) -> list().
+-spec to_list(integer(), queue:queue(), boolean()) -> list().
 to_list(N, R, WithEmpties) ->
     L =  lists:reverse(queue:to_list(to_queue(R, WithEmpties))),
     lists:reverse(case (catch lists:split(N, L)) of
@@ -43,14 +43,14 @@
                   end).
 
 % Add an element to a ring buffer.
--spec add(term(), queue()) -> queue().
+-spec add(term(), queue:queue()) -> queue:queue().
 add(E, R) ->
     queue:in(E, queue:drop(R)).
 
 % private
--spec to_queue(queue()) -> queue().
+-spec to_queue(queue:queue()) -> queue:queue().
 to_queue(R) -> to_queue(R, false).
 
--spec to_queue(queue(), boolean()) -> queue().
+-spec to_queue(queue:queue(), boolean()) -> queue:queue().
 to_queue(R, false) -> queue:filter(fun(X) -> X =/= empty end, R);
 to_queue(R, true) -> R.
diff -r -u ns_server.orig/src/vbucket_map_mirror.erl ns_server/src/vbucket_map_mirror.erl
--- ns_server.orig/src/vbucket_map_mirror.erl   2015-10-08 11:31:21.417885000 +0200
+++ ns_server/src/vbucket_map_mirror.erl    2015-10-07 22:27:21.036638000 +0200
@@ -119,7 +119,7 @@
       end).
 
 -spec node_vbuckets_dict_or_not_present(bucket_name()) ->
-                                               dict() | no_map | not_present.
+                                               dict:dict() | no_map | not_present.
 node_vbuckets_dict_or_not_present(BucketName) ->
     case ets:lookup(vbucket_map_mirror, BucketName) of
         [] ->
diff -r -u ns_server.orig/src/vbucket_move_scheduler.erl ns_server/src/vbucket_move_scheduler.erl
--- ns_server.orig/src/vbucket_move_scheduler.erl   2015-10-08 11:31:21.418054000 +0200
+++ ns_server/src/vbucket_move_scheduler.erl    2015-10-07 22:26:10.523913000 +0200
@@ -128,7 +128,7 @@
           backfills_limit :: non_neg_integer(),
           moves_before_compaction :: non_neg_integer(),
           total_in_flight = 0 :: non_neg_integer(),
-          moves_left_count_per_node :: dict(), % node() -> non_neg_integer()
+          moves_left_count_per_node :: dict:dict(), % node() -> non_neg_integer()
           moves_left :: [move()],
 
           %% pending moves when current master is undefined For them
@@ -136,13 +136,13 @@
           %% And that's first moves that we ever consider doing
           moves_from_undefineds :: [move()],
 
-          compaction_countdown_per_node :: dict(), % node() -> non_neg_integer()
-          in_flight_backfills_per_node :: dict(),  % node() -> non_neg_integer() (I.e. counts current moves)
-          in_flight_per_node :: dict(),            % node() -> non_neg_integer() (I.e. counts current moves)
-          in_flight_compactions :: set(),          % set of nodes
+          compaction_countdown_per_node :: dict:dict(), % node() -> non_neg_integer()
+          in_flight_backfills_per_node :: dict:dict(),  % node() -> non_neg_integer() (I.e. counts current moves)
+          in_flight_per_node :: dict:dict(),            % node() -> non_neg_integer() (I.e. counts current moves)
+          in_flight_compactions :: set:set(),          % set of nodes
 
-          initial_move_counts :: dict(),
-          left_move_counts :: dict(),
+          initial_move_counts :: dict:dict(),
+          left_move_counts :: dict:dict(),
           inflight_moves_limit :: non_neg_integer()
          }).
 
diff -r -u ns_server.orig/src/xdc_vbucket_rep_xmem.erl ns_server/src/xdc_vbucket_rep_xmem.erl
--- ns_server.orig/src/xdc_vbucket_rep_xmem.erl 2015-10-08 11:31:21.419959000 +0200
+++ ns_server/src/xdc_vbucket_rep_xmem.erl  2015-10-07 22:24:35.829228000 +0200
@@ -134,7 +134,7 @@
     end.
 
 %% internal
--spec categorise_statuses_to_dict(list(), list()) -> {dict(), dict()}.
+-spec categorise_statuses_to_dict(list(), list()) -> {dict:dict(), dict:dict()}.
 categorise_statuses_to_dict(Statuses, MutationsList) ->
     {ErrorDict, ErrorKeys, _}
         = lists:foldl(fun(Status, {DictAcc, ErrorKeyAcc, CountAcc}) ->
@@ -164,7 +164,7 @@
                       lists:reverse(Statuses)),
     {ErrorDict, ErrorKeys}.
 
--spec lookup_error_dict(term(), dict()) -> integer().
+-spec lookup_error_dict(term(), dict:dict()) -> integer().
 lookup_error_dict(Key, ErrorDict)->
      case dict:find(Key, ErrorDict) of
          error ->
@@ -173,7 +173,7 @@
              V
      end.
 
--spec convert_error_dict_to_string(dict()) -> list().
+-spec convert_error_dict_to_string(dict:dict()) -> list().
 convert_error_dict_to_string(ErrorKeyDict) ->
     StrHead = case dict:size(ErrorKeyDict) > 0 of
                   true ->

Next!

# gmake
[no errors]

There is no next error. Let's look at the destination folder:

# ll install/
total 32
drwxr-xr-x  3 root  wheel  1536 Oct  8 11:21 bin/
drwxr-xr-x  2 root  wheel   512 Oct  8 11:21 doc/
drwxr-xr-x  5 root  wheel   512 Oct  8 11:21 etc/
drwxr-xr-x  6 root  wheel  1024 Oct  8 11:21 lib/
drwxr-xr-x  3 root  wheel   512 Oct  8 11:21 man/
drwxr-xr-x  2 root  wheel   512 Oct  8 11:21 samples/
drwxr-xr-x  3 root  wheel   512 Oct  8 11:21 share/
drwxr-xr-x  3 root  wheel   512 Oct  8 11:21 var/

Success. Couchbase is built.

Running the server

# bin/couchbase-server
bin/couchbase-server: Command not found.

What the hell is that file?

root@couchbasebsd:~/couchbase # head bin/couchbase-server
#! /bin/bash
#
# Copyright (c) 2010-2011, Couchbase, Inc.
# All rights reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0

Yum. Sweet bash. Let's dirty our system a bit:

# ln -s /usr/local/bin/bash /bin/bash

Let's try running the server again:

# bin/couchbase-server
Erlang/OTP 18 [erts-7.0.1] [source] [64-bit] [smp:2:2] [async-threads:16] [hipe] [kernel-poll:false]

Eshell V7.0.1  (abort with ^G)

Nothing complained. Let's see if the web UI is present.

# sockstat -4l | grep 8091
#

It's not.

What's in the log?

[user:critical,2015-10-08T11:42:24.599,ns_1@127.0.0.1:ns_server_sup<0.271.0>:menelaus_sup:start_link:51]Couchbase Server has failed to start on web port 8091 on node 'ns_1@127.0.0.1'. Perhaps another process has taken port 8091 already? If so, please stop that process first before trying again.
[ns_server:info,2015-10-08T11:42:24.600,ns_1@127.0.0.1:mb_master<0.319.0>:mb_master:terminate:299]Synchronously shutting down child mb_master_sup

The server could not start. Let's see more logs:

# /root/couchbase/install/bin/cbbrowse_logs
[...]
[error_logger:error,2015-10-08T11:57:36.860,ns_1@127.0.0.1:error_logger<0.6.0>:ale_error_logger_handler:do_log:203]
=========================SUPERVISOR REPORT=========================
     Supervisor: {local,ns_ssl_services_sup}
     Context:    start_error
     Reason:     {bad_generate_cert_exit,1,<<>>}
     Offender:   [{pid,undefined},
                  {id,ns_ssl_services_setup},
                  {mfargs,{ns_ssl_services_setup,start_link,[]}},
                  {restart_type,permanent},
                  {shutdown,1000},
                  {child_type,worker}]

bad_generate_cert_exit? Let's execute that program ourselves:

# bin/generate_cert
ELF binary type "0" not known.
bin/generate_cert: Exec format error. Binary file not executable.

# file bin/generate_cert
bin/generate_cert: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), statically linked, BuildID[md5/uuid]=48f74c5e6c624dfe8ecba6d8687f151b, not stripped

Nothing beats building some software on FreeBSD and ending up with Linux binaries.

Where is the source of that program?

# find . -name '*generate_cert*'
./ns_server/deps/generate_cert
./ns_server/deps/generate_cert/generate_cert.go
./ns_server/priv/i386-darwin-generate_cert
./ns_server/priv/i386-linux-generate_cert
./ns_server/priv/i386-win32-generate_cert.exe
./install/bin/generate_cert

Let's build it and replace the Linux one.

# cd ns_server/deps/generate_cert/
# go build
# file generate_cert
generate_cert: ELF 64-bit LSB executable, x86-64, version 1 (FreeBSD), dynamically linked, interpreter /libexec/ld-elf.so.1, not stripped
# cp generate_cert ../../../install/bin/

# cd ../gozip/
# go build
# cp gozip ../../../install/bin

Let's run the server again.

# bin/couchbase-server
Erlang/OTP 18 [erts-7.0.1] [source] [64-bit] [smp:2:2] [async-threads:16] [hipe] [kernel-poll:false]

Eshell V7.0.1  (abort with ^G)

# sockstat -4l | grep 8091
root     beam.smp   93667 39 tcp4   *:8091                *:*

This time the web UI is present.

The first page you see after installing a Couchbase server

Let's follow the setup and create a bucket.

Couchbase cluster overview

So far, everything seems to be working.
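
For the record, the bucket could also have been created from the CLI; a sketch, with flag names as I believe this version of couchbase-cli spells them:

# bin/couchbase-cli bucket-create -u Administrator -p abcd1234 -c 127.0.0.1 \
    --bucket=test_bucket --bucket-type=couchbase --bucket-ramsize=100 --bucket-replica=1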

Interacting with the server via the CLI also works.

# bin/couchbase-cli bucket-list -u Administrator -p abcd1234 -c 127.0.0.1
default
 bucketType: membase
 authType: sasl
 saslPassword:
 numReplicas: 1
 ramQuota: 507510784
 ramUsed: 31991104
test_bucket
 bucketType: membase
 authType: sasl
 saslPassword:
 numReplicas: 1
 ramQuota: 104857600
 ramUsed: 31991008
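
The same data is exposed over the REST API, if you have curl around (the endpoint is Couchbase's documented bucket listing):

# curl -s -u Administrator:abcd1234 http://127.0.0.1:8091/pools/default/buckets | head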

Conclusion

I obviously haven't tested every feature of the server, but as far as this demonstration goes, Couchbase is perfectly capable of running on FreeBSD.

GLIB header files do not match library version

Let's try to build graphics/gdk-pixbuf2 on FreeBSD:

# make -C /usr/ports/graphics/gdk-pixbuf2 install clean
[...]
checking for GLIB - version >= 2.37.6... *** GLIB header files (version 2.36.3) do not match
*** library (version 2.44.1)
no
configure: error:
*** GLIB 2.37.6 or better is required. The latest version of
*** GLIB is always available from ftp://ftp.gtk.org/pub/gtk/.
===>  Script "configure" failed unexpectedly.
Please report the problem to gnome@FreeBSD.org [maintainer] and attach the
"/usr/ports/graphics/gdk-pixbuf2/work/gdk-pixbuf-2.32.1/config.log" including
the output of the failure of your make command. Also, it might be a good idea
to provide an overview of all packages installed on your system (e.g. a
/usr/local/sbin/pkg-static info -g -Ea).
*** Error code 1

Stop.
make[1]: stopped in /usr/ports/graphics/gdk-pixbuf2
*** Error code 1

Stop.
make: stopped in /usr/ports/graphics/gdk-pixbuf2
# pkg info | grep glib
glib-2.44.1_1                  Some useful routines of C programming (current stable version)

Huh? Okay. I was pretty sure 2.44 > 2.37. The actual complaint, though, is on the line above: the headers the build found (2.36.3) don't match the installed library (2.44.1).

Who does that check?

# grep -R 'GLIB header files' *
work/gdk-pixbuf-2.32.1/aclocal.m4:      printf("*** GLIB header files (version %d.%d.%d) do not match\n",
work/gdk-pixbuf-2.32.1/configure:      printf("*** GLIB header files (version %d.%d.%d) do not match\n",
work/gdk-pixbuf-2.32.1/config.log:|       printf("*** GLIB header files (version %d.%d.%d) do not match\n",
work/gdk-pixbuf-2.32.1/configure.libtool.bak:      printf("*** GLIB header files (version %d.%d.%d) do not match\n",
# cat work/gdk-pixbuf-2.32.1/aclocal.m4
[...]
  else if ((glib_major_version != GLIB_MAJOR_VERSION) ||
           (glib_minor_version != GLIB_MINOR_VERSION) ||
           (glib_micro_version != GLIB_MICRO_VERSION))
    {
      printf("*** GLIB header files (version %d.%d.%d) do not match\n",
             GLIB_MAJOR_VERSION, GLIB_MINOR_VERSION, GLIB_MICRO_VERSION);
      printf("*** library (version %d.%d.%d)\n",
             glib_major_version, glib_minor_version, glib_micro_version);
    }
[...]

Where are those constants defined?

# grep -R -A 2 GLIB_MAJOR_VERSION /usr/local/include/*
/usr/local/include/glib-2.0/glibconfig.h:#define GLIB_MAJOR_VERSION 2
/usr/local/include/glib-2.0/glibconfig.h:#define GLIB_MINOR_VERSION 36
/usr/local/include/glib-2.0/glibconfig.h:#define GLIB_MICRO_VERSION 3

Where does this file come from?

# pkg which /usr/local/include/glib-2.0/glibconfig.h
/usr/local/include/glib-2.0/glibconfig.h was not found in the database

Nowhere.

Did the port even install it?

# grep glibconfig.h /usr/ports/devel/glib20/pkg-plist
lib/glib-2.0/include/glibconfig.h

No, not at that path. The stray copy in include/glib-2.0/ must be a leftover from some ancient installation.
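
That also explains the failure: pkg-config hands the compiler -I/usr/local/include/glib-2.0 before -I/usr/local/lib/glib-2.0/include, where the port actually puts glibconfig.h, so the stale copy shadows the real one. The flags should look something like this:

# pkg-config --cflags glib-2.0
-I/usr/local/include/glib-2.0 -I/usr/local/lib/glib-2.0/include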

Let's delete it and rebuild the port just in case.

# rm /usr/local/include/glib-2.0/glibconfig.h
# make -C /usr/ports/devel/glib20 reinstall clean
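
While at it, the whole include tree can be swept for other files pkg doesn't know about; slow, but thorough:

# find /usr/local/include -type f -exec pkg which {} \; | grep 'not found'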

Let's try our build again.

# make -C /usr/ports/graphics/gdk-pixbuf2 install clean
[...]
checking for GLIB - version >= 2.37.6... yes (version 2.44.1)
[...]
===>  Checking if gdk-pixbuf2 already installed
===>   Registering installation for gdk-pixbuf2-2.32.1
Installing gdk-pixbuf2-2.32.1...
===>  Cleaning for gdk-pixbuf2-2.32.1

Success!
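
A quick query should confirm the package is now registered:

# pkg info -x gdk-pixbuf2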
