Attending FOSDEM 2014 in Brussels

FOSDEM opening keynote

On the 1st and 2nd of February 2014, Brussels became the place to learn and train for a plethora of tech-savvy women, men and robots. This year's edition of FOSDEM took place once again at the ULB (Université Libre de Bruxelles) Solbosch campus, for openness's sake. Since I really enjoyed spending these two days there, literally surrounded by thousands of geeks, I've decided to write this article to encourage readers to plan their trip well in advance for upcoming editions (especially as access to all tracks is free, as in free beer).

With over 7,000 visitors attending roughly five hundred talks, the dual-stack (IPv4/IPv6) wireless network was well worth the effort and held up until the very end. Seriously, when was the last time you attended an event without running into hopeless wireless issues? Even though I personally only connected to the dual-stack network from time to time, its sheer availability is a reminder of all the care the volunteers put in over these two days. The huge success of the weekend also shows how institutions and public facilities can be put to use to build even more synergy within existing and yet-to-be-built open source project communities.

Security

By pointing me at an article posted on the Tor project blog a few months earlier, Nicolas Bazire drew my attention to deterministic builds. One of the many risks incurred by core OSS project contributors is that a single compromised build machine could end up infecting millions of others in a snowball effect. Jérémy Bobbio (Debian maintainer) decided to work from there, with a clear mission statement: "It should be possible to reproduce, byte for byte, every build of every package in Debian".

This is what led me to one of the first sessions of Saturday morning in the distributions devroom, where Jérémy Bobbio, a.k.a. Lunar at debian dot org, explained how this goal could be achieved by

  • recording the build environment,
  • reproducing the build environment,
  • eliminating unneeded variations.

Reproducible builds

Most of the unneeded variations originate from

  • timestamps,
  • build paths,
  • hostname, username and uname,
  • file list ordering.

The beginning of a solution therefore consists in

  • sharing a standard virtual machine,
  • installing packages from snapshots.debian.org,
  • applying plenty of other tricks described on the reference wiki, such as
    • passing "--enable-deterministic-archives" to binutils,
    • setting timestamps through environment variables,
    • patching gzip,
    • taking advantage of stable file lists in archives.

By the time of the conference, Lunar had achieved a 62% success rate, with 3,196 packages out of 5,151 that could be built in a deterministic way, i.e. with the same checksums obtained for the same binaries. He insisted on the fact that this can only be part of the solution, not a silver bullet for the overall security of the build toolchain. Lunar also called for contributors (targeting the PHP registry among many others for Debian) to extend the current coverage: he started this work at DebConf 2013 and it now needs more hands. Anybody interested in security in general, in Debian and other distributions, or simply willing to contribute, can subscribe to the official Reproducible builds mailing list.
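As a rough illustration of what reproducible means here, the same source package can be built twice and the resulting checksums compared. The sketch below is a hand-rolled check, not the tooling used by the project, and assumes an unpacked Debian source tree in the current directory:

# First build: record the checksums of the resulting binary packages
dpkg-buildpackage -us -uc
sha256sum ../*.deb > /tmp/first-build.sha256

# Second build (dpkg-buildpackage cleans the tree first): identical checksums
# mean the package builds deterministically
dpkg-buildpackage -us -uc
sha256sum -c /tmp/first-build.sha256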

Mozilla Track

In the Mozilla track, Srikar Ananthula described how Mozilla Persona can be a better way to sign in, with no need to store passwords or rely on third parties. Mozilla Webmaker already relies on it, and plenty of existing libraries and plugins implement the BrowserID protocol.

State of Firefox OS

Fabien Cazenave shed some light on the state of Firefox OS. According to Kazé, the days of the "flash and retry" experience are (mostly) over. Now that a healthy 12-week release cycle has been established, new web APIs are coming, including:

  • DataStore API,
  • Shared Workers.

A reference device (InFocus 10″, 1280×800×24-bit display, 16 GB storage, 2 GB RAM) has been selected to bring Firefox OS to tablets. As a result, developers will very soon be able to sign up for the contribution program.

Legal and policy issues

Besides, I had the pleasure of attending my first talk on legal and policy issues, given by John Sullivan (executive director of the Free Software Foundation) and entitled "JavaScript, if you love it, set it free". After recalling some of the foundation's biggest recent successes (such as sales of the Gluglug X60, one of the computers endorsed as respecting our freedom), he proposed a couple of ways to address the licensing of JavaScript code served by websites. The basic freedom checklist requires

  • a licence notice and possibly a copy of the free licence,
  • the complete source code (i.e. the preferred form of a program for making modifications).

Privacy

People waiting to enter the developer room

Have you ever been looking to replace your current webmail? Bjarni R. Einarsson may have a more than decent solution. As highlighted in his Mailpile introduction, the state of e-mail has been kind of stuck in the 90s. Given that cloud-hosted e-mail is even worse for freedom than closed source software, the Mailpile team decided to focus on

  • making software FOSS folks enjoy
    • hacking on,
    • and want to use;
  • making e-mail encryption understandable,
  • making decentralization easy,
  • finding better business models for e-mail.

The Mailpile team has crafted a very neat web interface. The alpha version has been released with PGP encryption, signatures and a built-in search engine. Moreover, developers can already play around with a REST API and a command-line interface. Multiple mailbox formats are supported, and the spam filter learns from manually tagged e-mails as well as from the messages the user reads, replies to or forwards.
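Trying the alpha locally boils down to cloning the repository and starting the bundled launcher; the launcher name below reflects my recollection of the alpha and may have changed since:

# Fetch the Mailpile sources
git clone https://github.com/mailpile/Mailpile.git && cd Mailpile

# Start the built-in command-line interface and local web interface
./mp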

Graph databases

On the morning of the following day, Armando Miraglia (known as a committer on sshguard) demonstrated the new Giraph APIs for Python, Rexster and Gora. Giraph, an open-source implementation of Pregel, is used for large-scale graph processing on top of Hadoop. Armando's fork supports writing user logic in languages other than Java, such as Python.

Davy Suvee illustrated the power of graphs for analyzing biological data with an exploration platform called BRAIN. The platform, built to relate 23 million biomedical articles, can be separated into three distinct layers:

  • meta-data stored in MongoDB,
  • graphs persisted in Neo4j,
  • an end-user interface built with Swing.

JavaScript

Entering the JavaScript devroom was not trivial, but I eventually managed to get a seat. From what I have heard, it was the first time JavaScript had a fully dedicated room at FOSDEM, apart from the Mozilla tracks. In his talk entitled "Hidden gems in npm", Robert Kowalski mentioned useful npm subcommands such as config, repo and outdated, and singled out npmd in particular (a few sample invocations are shown after the list), as it provides:

  • offline search,
  • offline publishing with queuing,
  • full caching of installed modules.
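For the record, here is what a few of these invocations look like with a stock npm install (the package name passed to npm repo is only an example):

# Show the effective npm configuration
npm config list

# Open the source repository of a package in the browser
npm repo express

# List installed dependencies that lag behind the registry
npm outdated

# npmd itself can be installed from the registry
npm install -g npmd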

Afterwards, Laurent Eschenauer showcased how to pilot an AR.Drone in JavaScript. The main principles behind drone flight automation were exposed:

  • Remote control API,
  • State estimation (or figuring out where a drone is),
  • Motion control with PID,
  • Path planning.

Testing and automation

Last but not least, Sebastian Bergmann spoke at FOSDEM for the first time, in the testing and automation developer room. He told us the story of "Pride and Prejudice: Testing the PHP World" and explained how PHPUnit was born out of pain, by delegating that pain to machines through automation.


Hopefully, we will have the opportunity to attend the next series of conferences at FOSDEM 2015, and we might even meet some of you there in a crowded devroom.

Disclaimer: This article was originally published on the Theodo blog

Build a zen command-line environment

... with Git, Tmux, Oh My Zsh, Mosh (and Docker)

TL;DR

This article aims at learning how to install lovely command-line tools in a recoverable way. You'll be able to multiplex zsh terminals over a UDP connection (using mobile shell):

Zen Command-Line with tmux and oh-my-zsh

The reader is strongly encouraged to browse the referenced Dockerfile thoroughly; its steps summarize how to build the premises of such a text-only environment.

Disclaimer: To some extent, you might feel a bit dizzy because of the specially crafted mise en abyme. Dizziness is a typical side effect of Linux container abuse... No worries, the feeling will just vanish with time (or you might just end up killing the wrong processes).

Dot files

Each and every user of Linux distributions (or similarly flavoured operating systems) might take a minute or two to acknowledge the significance of their own dot files.

Even though they are hidden by design, I believe our productivity directly depends on the care they receive from us.

We have all heard of hard drives simply dying in some random boxes (if not worse). Still feeling a bit sceptical? Pretty numbers have been published on the Backblaze blog just to satisfy our curiosity. Being a big fan of upcycling does not strictly imply there will be a happy ending for the few chosen hard drives, anyway.

The bottom line is: the more precious and rapidly changing our data feel, the more regularly we should back them up, ready to be restored. I insist on the latter part, as knowing precisely how to restore backups is the only way to really feel confident about them.

Let us see how to proceed in order to get things done, i.e. keep those hidden dot files safe.

Experimenting with Docker

The experiment is bootstrapped with the now classic installation of Vagrant and VirtualBox.

Installing VirtualBox

For instance, in a box running Debian Wheezy 7.3, we would execute the following commands:

# Add the GPG key downloaded from the official VirtualBox website
wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc -O- | sudo apt-key add -

# Register the VirtualBox repository for Wheezy
echo "deb http://download.virtualbox.org/virtualbox/debian wheezy contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list

# Download package lists from the repositories
sudo apt-get update

# Install VirtualBox (4.3 being the current series at the time of writing)
sudo apt-get install virtualbox-4.3

Binaries for other operating systems can be fetched from the official VirtualBox download page.

Installing Vagrant

The Vagrant download page offers a 64-bit installation package for Debian:

# Install dependencies needed to share folders over NFS
sudo apt-get install nfs-kernel-server nfs-common

# Download Vagrant installation package
wget https://dl.bintray.com/mitchellh/vagrant/vagrant_1.4.3_x86_64.deb -O /tmp/vagrant_1.4.3_x86_64.deb

# Install Vagrant
sudo dpkg -i /tmp/vagrant_1.4.3_x86_64.deb

Installing Docker

# Clone the linux container engine repository (we assume Git has already been installed in the host)
git clone https://github.com/dotcloud/docker.git && cd docker

# Run Vagrant
vagrant up

# Access the Vagrant box provided by dotCloud in order to use Docker from their official box
vagrant ssh

Customizing our shell

# Install git, vim and mobile shell in the vagrant box
sudo apt-get install git vim mosh

# Clone the repository containing a Dockerfile
git clone https://github.com/thierrymarianne/zen-cmd.git zen-cmd

# Build an image from the Dockerfile in the newly cloned repository and tag it "zen-cmd"
cd zen-cmd && docker build -t zen-cmd .

From this point on, one should have received a positive message (Successfully built) accompanied by a pretty hash. These 12 characters are worth keeping around.

Believe it or not, we are already done; or, better said, our personal shell has been set up according to our container build script (the Dockerfile). The pretty hash identifies a Docker image which can now be run in order to use our command-line interface.
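A quick sanity check, assuming the build ended well, is to list the local images and look for the freshly applied tag:

# The zen-cmd repository should show up along with the short image identifier
docker images | grep zen-cmd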

In a nutshell,

  • Installing a custom-tailored command-line environment only took the time of jotting down, in the shape of a Dockerfile, what needed to be customized.
  • Executing a single command was enough to restore a command-line environment personalized over time.

Fellows of little faith are absolutely right to show some doubt, so let us run the interactive shell (within a container, within a Vagrant box). In order to proceed, one just needs to execute the following commands:

# Copy predefined SSH configuration from the article repository
cp -R ./ssh/* /home/vagrant/.ssh

# Start an ssh agent
ssh-agent /bin/bash

# Let our ssh agent handle a key allowed to access the container
# with "elvish_word_for_friend" as the passphrase
# (One should otherwise generate their own key pair
# using `ssh-keygen -f path_to_private_key`)
chmod 0400 ~/.ssh/zen-cmd && ssh-add ~/.ssh/zen-cmd

# Run the OpenSSH daemon from our brand new image and keep the resulting container identifier
CONTAINER_ID=$(docker run -d -t zen-cmd /usr/sbin/sshd -D)

# Export the UTF-8 locale required by the mobile shell
export LC_ALL=en_US.UTF-8

# Alias the container IP address to the "zen-cmd" hostname
sudo /bin/bash -c "echo `docker inspect $CONTAINER_ID | grep IPAddress | cut -d '\"' -f 4`'    zen-cmd' >> /etc/hosts"

# Access our portable command-line environment using mosh (UDP port 6000 is open in the container)
mosh -p 6000 zen-cmd

# In the container, run tmux to multiplex zsh terminals
tmux
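For readers new to tmux, a handful of commands are enough to get going; the session name below is arbitrary:

# Start a named session
tmux new-session -s zen

# Detach from it with the default prefix: Ctrl-b, then d

# List running sessions and re-attach to one
tmux list-sessions
tmux attach-session -t zen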

Let us now dive into the details of the Dockerfile itself; its steps are summarized below (a condensed shell sketch follows the list):

  • Our package lists are updated (the same way we did before installing VirtualBox).
  • Packages needed to compile binaries are installed.
  • Target directories are created, respectively to
    • clone sources,
    • install binaries built from sources.
  • Git is installed.
  • The Tmux and oh-my-zsh repositories are cloned.
  • Tmux and oh-my-zsh are installed and configured.
  • Zsh is set as the default shell.
  • Password authentication is disabled for SSH.
  • The Mosh server is installed.
  • The UTF-8 locale required to run the mobile shell is generated.
  • The privilege separation directory is created for SSH.
  • Our Vagrant SSH public key is added to the authorized keys of the container.
  • The SSH and mobile shell ports are opened.
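For readers who prefer commands to prose, here is a condensed shell sketch of what such steps might look like inside the container; the repository URL, package names and paths are illustrative assumptions rather than excerpts from the actual Dockerfile:

# Update package lists and install prerequisites
# (the real Dockerfile builds some of these tools from freshly cloned sources instead)
apt-get update && apt-get install -y build-essential git openssh-server mosh locales zsh tmux

# Clone and install oh-my-zsh, then make zsh the default shell
git clone https://github.com/robbyrussell/oh-my-zsh.git /root/.oh-my-zsh
cp /root/.oh-my-zsh/templates/zshrc.zsh-template /root/.zshrc
chsh -s /bin/zsh

# Disable password authentication and create the privilege separation directory for sshd
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
mkdir -p /var/run/sshd

# Generate the UTF-8 locale required by the mobile shell
locale-gen en_US.UTF-8

# Authorize our Vagrant public key (the path of the copied key is hypothetical)
mkdir -p /root/.ssh && cat /tmp/vagrant.pub >> /root/.ssh/authorized_keys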

Leveraging git

Since we mostly deal with plain text files here, Git appears to be a perfectly legitimate version control system for the job.

Even GitHub has made a point of popularizing the habit of sharing them (the dot files). What are the direct benefits (a minimal sketch follows the list)? According to the unofficial guide,

  • boxes are kept in sync
  • technology watch becomes easier
  • knowledge is redistributed
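A minimal sketch of that habit, assuming a hypothetical dotfiles repository and the common clone-then-symlink approach (the URL and file names are placeholders):

# Clone a dotfiles repository (hypothetical URL)
git clone https://github.com/your-name/dotfiles.git ~/dotfiles

# Symlink the tracked files into place
ln -sf ~/dotfiles/.zshrc ~/.zshrc
ln -sf ~/dotfiles/.tmux.conf ~/.tmux.conf
ln -sf ~/dotfiles/.gitconfig ~/.gitconfig

# Keeping boxes in sync then boils down to committing, pushing and pulling
cd ~/dotfiles && git add -A && git commit -m "Tweak tmux status line" && git push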

End Of Line

I hope you have enjoyed this setup, which has the clear advantages of being portable, testable and recoverable. The syntax for writing your own Dockerfile can be found in the official Docker documentation.

Disclaimer: This article was originally published on the Theodo blog