Gerrit’s REST API

Gerrit has a nice REST API.

I needed a quick way to check whether a change was merged, and it turned out to be quite easy:

# gerrit.example.com is a placeholder for your Gerrit instance
while read CHANGE ; do curl "https://gerrit.example.com/changes/$CHANGE" ; done

That way I just had to paste the change-id into the terminal and quickly got my response.
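One detail worth knowing: the JSON that Gerrit returns is guarded by a `)]}'` prefix (to prevent XSSI), so it needs to be stripped before parsing. A minimal sketch of pulling out the merge status, using a canned response instead of a live instance:

```shell
# Shape of a (trimmed) response to GET /changes/<change-id>;
# a real response would come from your Gerrit instance via curl.
RESPONSE=$')]}\'\n{"id":"demo~master~I1234","status":"MERGED"}'

# Drop the XSSI prefix line, then pull out the status field
echo "$RESPONSE" | tail -n +2 | grep -o '"status":"[A-Z]*"'
```

A change is merged when the status field reads `MERGED`.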

bookzilla instead of amazon

This is nothing new; it was actually uncovered last year already. But today I saw it again, and some of these practices are just not what I want to support with my Kaufkraft (buying power). And that is actually our best weapon as consumers: where we spend our money.

So, time to re-activate my bookzilla account, which also supports F/LOSS.


There is surely the possibility of using those sub-shops of amazon which also support F/LOSS, but those sub-shops will still use amazon’s infrastructure and “human resources”, which ain’t what I want.

But I must admit that I am not sure how the workers at libri/bookzilla are treated.

This .travis.yml actually shows how easy CI can be on simple packages, nothing like Node.

  - curl | sudo sh -
  - cargo build --verbose
  - cargo test --verbose
  - rustdoc --test src/ -L target
  - LD_LIBRARY_PATH=/usr/local/lib
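Reassembled into a complete file, the lines above roughly correspond to a .travis.yml like the following. This is a sketch: the installer URL is elided in the original post, so a placeholder stands in, `language: generic` is an assumption, and the `LD_LIBRARY_PATH` line presumably belongs in `env` rather than `script`:

```yaml
language: generic
env:
  - LD_LIBRARY_PATH=/usr/local/lib
install:
  # placeholder for the elided installer URL from the post
  - curl <rust-installer-url> | sudo sh -
script:
  - cargo build --verbose
  - cargo test --verbose
  - rustdoc --test src/ -L target
```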


Missing Computing Device Manufaktur

Goldene Herrentaschenuhr Nr. 1613 (Golden pocket watch no. 1613)

I’m not really sad, but I wonder: Where are the people and companies caring about decent computing hardware? I mean shiny, stylish laptops, solidly built, reliable, durable, and maybe not beef, but tofu.

We know that one … who was it … vendor. But that cannot be everything. Many others make devices, but they lack it, the bit which makes a device perfect, different, fitting, the thing you want.

Can it be that hard?

The mobile world is showing that small devices can also be done by small “Manufakturen”. Geeksphone, Jolla, and Fairphone are just three examples of small vendors that build decent hardware.

The Anaconda installer is a piece of software used to install Fedora, RHEL and their derivatives. Since the installation is internally a complicated process, and there are many aspects of the resulting system that need to be configured during installation, it has been decided that the Anaconda installer needs to support dynamically loaded plugins (addons) that will be developed and maintained by other teams and people specialized in certain areas. This guide should help those people writing an Anaconda addon to get an insight into the architecture and basic principles used by the Anaconda installer, as well as an overview of the API and helper functions provided.

Ever wondered how to write addons for Anaconda? Vratislav wrote it down for all of us!

Weekly oVirt Engine Virtual Appliance builds

Światło na końcu tunelu / Light at the end of the tunnel

Finally there are weekly Fedora 19-based oVirt Engine Appliance builds.

They can be found in oVirt’s Jenkins instance.

If you want to use Copr repos, then you want to use dnf as well

It has never been easier to use a Copr repository with the Copr plugin for dnf.

To enable a copr repo on your local host you just need to run:

sudo dnf copr enable bkabrda/python-3.4 fedora-20-x86_64

And if you ain’t sure what repo to enable, just try

dnf copr search rust

Once the repository is enabled, it is also available to the other yum/rpm-based tools, like yum and pkcon.

Caching large objects and repos with Squid - Easy huh?


… yes it is. But only if you take care that the §$%&/() maximum_object_size directive appears above/before the cache_dir directive.

If you remember this, then Matt’s »Lazy distro mirrors with squid« tutorial is a great thing to lazily cache repos.

Personally I took a slightly different approach. I edited /etc/hosts to point download.fp.o to my proxy, and configured squid as a reverse proxy.
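The /etc/hosts part is just a line mapping the mirror name to the proxy host (the IP below is a placeholder for your proxy’s address):

```
# /etc/hosts on the clients
192.0.2.10   download.fedoraproject.org
```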


# Let the local proxy accelerate access to download.fp.o
http_port 80 accel defaultsite=download.fedoraproject.org no-vhost
# Tell squid where the origin is
cache_peer download.fedoraproject.org parent 80 0 no-query originserver name=myAccel

maximum_object_size 5 GB
cache_dir ufs /var/spool/squid 20000 16 256


# Caching of rpms and isos 
refresh_pattern -i \.rpm$ 129600 100% 129600 refresh-ims override-expire
refresh_pattern -i \.iso$ 129600 100% 129600 refresh-ims override-expire

Squid can be easily installed using:

pkcon install squid

Mozilla’s precompiled Rust for Fedora


It is still not easy to package rust for Fedora in the intended way, which includes using Fedora’s llvm and libuv.

A much easier way, which I now chose, is to use the official rust binaries and wrap them in an rpm. This can then be built in Copr.

The rust-binary package includes the official release. The same method can also be used to create a rust-nightly-binary which could deliver the precompiled rust nightlies.

Now it’s easy to enjoy rust on Fedora, especially with the recently discovered Rust By Example.

To get started you just need to run:

# We are using dnf's copr plugin, because it is - easy!
$ pkcon install dnf dnf-plugins-core

# Enable copr repo
$ sudo dnf copr enable fabiand/rust-binary

# Install rust-binary
$ pkcon refresh
$ pkcon install rust-binary

$ rustc --version
rustc 0.11.0 (aa1163b92de7717eb7c5eba002b4012e0574a7fe 2014-06-27 12:50:16 -0700)

Please note that the rpm only includes rustc and rustdoc, not cargo, rust’s upcoming package manager.

Rust is a programming language with a focus on type safety, memory safety, concurrency and performance.
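As a tiny illustration of the memory-safety focus: ownership makes use-after-move a compile-time error. (This sketch uses modern syntax, newer than the 0.11 snapshot mentioned above.)

```rust
fn main() {
    let s = String::from("hello");
    let t = s; // ownership of the heap buffer moves from s to t
    // println!("{}", s); // would not compile: s was moved out
    println!("{}", t);
}
```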

This site seems to be a nice walk-through of some of Rust’s aspects.

html5 webrtc sites and are two places to communicate.

“[…] there are a few findings that stand out: Build frequency and developer (in)experience don’t affect failure rates, most build errors are dependency-related, […]”

Taking a look at the rootfs footprint of a LiveCD and a disk (image)

Archive: First Footprint on the Moon (NASA, Marshall, 07/69)

Besides the details about Node’s memory footprint, it was also interesting to see how much space we gain from the rootfs by our minimization efforts.

The idea is to take a recent oVirt Node image and compare some of its stats to the stats of a regular image built using the @core group.

The two points I investigated are:

  • How does the minimization affect the number of packages?
  • How does the minimization affect the space requirements?


  • The first one was addressed by counting the number of installed packages (rpm)
  • The second one was addressed by
    • summing up the reported sizes of the installed packages (rpm)
    • determining the disk space in use (df)

Because guestfish’ing into a LiveCD is tiresome, I created this script to gather the stats for me. Additionally it’s nice to have a tool at hand to create reproducible results.
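The gist of such a stats gathering can be sketched as a small shell function (the function name and paths are illustrative; it assumes the image’s rootfs is already mounted somewhere):

```shell
# Print package count, summed rpm sizes, and real disk usage
# for a rootfs mounted at the given path.
rootfs_stats() {
    local root="$1"
    echo "NumPkgs:    $(rpm --root="$root" -qa | wc -l)"
    rpm --root="$root" -qa --qf '%{SIZE}\n' \
        | awk '{s+=$1} END {printf "SizePkgs:   %.2f MB\n", s/1024/1024}'
    df -m "$root" | awk 'NR==2 {print "SizeRootfs: " $3 " MB"}'
}
```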

The results are - once again - interesting.

$ bash ovirt-node-iso-scratch.iso runtime-layout.img
Image: ovirt-node-iso-scratch.iso
ImageSize: 197.00 MB
  NumPkgs:    449
  SizePkgs:   867.50 MB
  SizeRootfs: 564.97 MB

Image: runtime-layout.img
ImageSize: 1543.63 MB
  NumPkgs:    490
  SizePkgs:   1071.67 MB
  SizeRootfs: 1323.57 MB

DiffNum: -41
DiffSizePkgs: -204.17
DiffSizeRootfs: -758.60

What do we see here?

  • We see that the plain rootfs has 41 additional packages installed. Or the other way round: 41 packages are blacklisted on the LiveCD.
  • The sum of reported rpm sizes is ~200 MB higher on the plain rootfs than on the livecd
  • The disk usage on the plain rootfs is ~760 MB higher than on the livecd

Especially the last two points indicate that the file based blacklisting is actually responsible for freeing up so much space.

This is just a rough estimate. More time needs to be spent investigating the details of these differences.