This .travis.yml actually shows how easy CI can be on simple packages, nothing like [Node](http://www.ovirt.org/Node).

install:
  - curl http://www.rust-lang.org/rustup.sh | sudo sh -
script:
  - cargo build --verbose
  - cargo test --verbose
  - rustdoc --test src/lib.rs -L target
env:
  - LD_LIBRARY_PATH=/usr/local/lib

Source: https://github.com/alexcrichton/rust-compress/blob/master/.travis.yml

Missing Computing Device Manufaktur

Golden pocket watch No. 1613

I’m not really sad, but I wonder: where are the people and companies that care about decent computing hardware? I mean shiny, stylish laptops, solidly built, reliable, durable, and maybe not beef, but tofu.

We know that one … who was it … vendor. But that cannot be everything. Many others make devices, but they lack it: the bit which makes a device perfect, different, fitting, the thing you want.

Can it be that hard?

The mobile world is showing that small devices can also be done by small “Manufakturen”. Geeksphone, Jolla, and Fairphone are just three examples of small vendors which build decent hardware.

The Anaconda installer is a piece of software used to install Fedora, RHEL and their derivatives. Since the installation is internally a complicated process, and there are many aspects of the resulting system that need to be configured during installation, it has been decided that the Anaconda installer needs to support dynamically loaded plugins (addons) that will be developed and maintained by other teams and people specialized in certain areas. This guide should help those people writing an Anaconda addon to get an insight into the architecture and basic principles used by the Anaconda installer, as well as an overview of the API and helper functions provided.

Ever wondered how to write addons for Anaconda? Vratislav wrote it down for all of us!

Weekly oVirt Engine Virtual Appliance builds

Light at the end of the tunnel

Finally there are weekly Fedora 19 based oVirt Engine Appliance builds.

They can be found in oVirt’s jenkins instance.

If you want to use Copr repos, then you want to use dnf as well

It has never been easier to use a Copr repository with the Copr plugin for dnf.

To enable a copr repo on your local host you just need to run:

dnf copr enable bkabrda/python-3.4 fedora-20-x86_64

And if you aren’t sure which repo to enable, just try:

dnf copr search rust

Once the repository is enabled, it is also available to the other yum/rpm based tools, like yum and pkcon.
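
For example, after enabling the bkabrda/python-3.4 repo from above, its packages can be installed with any of those tools (the package name below is just illustrative):

# The enabled Copr repo is picked up like any other repository
$ sudo yum install python3
# or
$ pkcon install python3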

Caching large objects and repos with Squid - Easy huh?

Squid

… yes it is. But only if you take care that the §$%&/() maximum_object_size directive appears above/before the cache_dir directive.

If you remember this, then Matt’s »Lazy distro mirrors with squid« tutorial is a great thing to lazily cache repos.

Personally I took a slightly different approach. I edited /etc/hosts to let download.fedoraproject.org point to my proxy, and configured squid as a reverse proxy.
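
For reference, that /etc/hosts tweak on the clients boils down to a single line (the address below is just a placeholder for wherever Squid runs):

# /etc/hosts - send download.fedoraproject.org to the local Squid reverse proxy
# (192.168.1.10 is a placeholder for the proxy's address)
192.168.1.10   download.fedoraproject.org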

…

# Let the local proxy accelerate access to download.fp.o
http_port 80 accel defaultsite=download.fedoraproject.org no-vhost
# Tell squid where the origin is
cache_peer download.fedoraproject.org parent 80 0 no-query originserver name=myAccel

# REMEMBER the ORDER
maximum_object_size 5 GB
cache_dir ufs /var/spool/squid 20000 16 256

…

# Caching of rpms and isos 
refresh_pattern -i .rpm$ 129600 100% 129600 refresh-ims override-expire
refresh_pattern -i .iso$ 129600 100% 129600 refresh-ims override-expire

Squid can be easily installed using:

pkcon install squid

Mozilla’s precompiled Rust for Fedora

Rust

It is still not easy to package rust for Fedora in the intended way, which includes using Fedora’s llvm and libuv.

A much easier way, which I chose for now, is to use the official Rust binaries and wrap them in an rpm. This can then be built in Copr.

The rust-binary package includes the official release. The same method can also be used to create a rust-nightly-binary which could deliver the precompiled rust nightlies.

Now it’s easy to enjoy Rust on Fedora, especially with the recently discovered Rust By Example.

To get started you just need to run:

# We are using dnf's copr plugin, because it is - easy!
$ pkcon install dnf dnf-plugins-core

# Enable copr repo
$ sudo dnf copr enable fabiand/rust-binary

# Install rust-binary
$ pkcon refresh
$ pkcon install rust-binary

$ rustc --version
rustc 0.11.0 (aa1163b92de7717eb7c5eba002b4012e0574a7fe 2014-06-27 12:50:16 -0700)

Please note that the rpm only includes rustc and rustdoc, not cargo, rust’s upcoming package manager.
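
As a quick smoke test that the packaged compiler works, rustc alone is enough (just a sketch, the file name is arbitrary):

# Compile and run a minimal program with the packaged rustc
$ cat > hello.rs <<'EOF'
fn main() { println!("Hello, Fedora!"); }
EOF
$ rustc hello.rs
$ ./hello
Hello, Fedora!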

Rust is a programming language with a focus on type safety, memory safety, concurrency and performance.

This site seems to be a nice walk-through of some of Rust’s aspects.

HTML5 WebRTC sites

http://palava.tv and http://talky.io are two places to communicate.

“[…] there are a few findings that stand out: Build frequency and developer (in)experience don’t affect failure rates, most build errors are dependency-related, […]”

Taking a look at the rootfs footprint of a LiveCD and a disk (image)

Archive: First Footprint on the Moon (NASA, Marshall, 07/69)

Besides the details about Node’s memory footprint, it was also interesting to see how much space we gained on the rootfs from our minimization efforts.

The idea is to take a recent oVirt Node image and compare some of its stats to those of a regular image built using the @core group.

The two points I investigated are:

  • How does the minimization affect the number of packages?
  • How does the minimization affect the space requirements?

  • The first one was addressed by counting the number of installed packages (rpm).
  • The second one was addressed by:
    • summing up the reported size of the installed packages (rpm)
    • determining the disk space in use (df)

Because guestfish’ing into a LiveCD is tiresome, I created this script to gather the stats for me. Additionally it’s nice to have a tool at hand to create reproducible results.
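
For a plain disk image the idea boils down to something like the sketch below (assuming the libguestfs tools are installed on the host; the LiveCD additionally needs its squashfs unpacked, which is the tiresome part). The real img-stats.sh linked above produces the output shown next.

#!/bin/bash
# Rough sketch of the stats gathering, run read-only against the image
IMG="$1"
echo "Image: $IMG"

# Number of installed packages inside the image
echo -n "NumPkgs: "
guestfish --ro -a "$IMG" -i sh 'rpm -qa | wc -l'

# Disk space actually in use inside the image's filesystems
virt-df -h -a "$IMG"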

The results are - once again - interesting.

$ bash img-stats.sh ovirt-node-iso-scratch.iso runtime-layout.img
Image: ovirt-node-iso-scratch.iso
ImageSize: 197.00 MB
in_node
  NumPkgs:    449
  SizePkgs:   867.50 MB
  SizeRootfs: 564.97 MB

Image: runtime-layout.img
ImageSize: 1543.63 MB
in_imgbased
  NumPkgs:    490
  SizePkgs:   1071.67 MB
  SizeRootfs: 1323.57 MB

DiffNum: -41
DiffSizePkgs: -204.17
DiffSizeRootfs: -758.60

What do we see here?

  • We see that the plain rootfs has 41 additional packages installed. Or the other way round: 41 packages are blacklisted on the LiveCD.
  • The sum of reported rpm sizes is ~200 MB higher on the plain rootfs than on the livecd
  • The disk usage on the plain rootfs is ~760 MB higher than on the livecd

Especially the last two points indicate that the file based blacklisting is actually responsible for freeing up so much space.

This is just a rough estimate. More time needs to be spent investigating the details of these differences.

Taking a look at the memory footprint of a LiveCD and a disk (image)

A memory

Minimization is a hot topic for oVirt Node - mainly to reduce the size of the resulting rootfs (and livecd) image.

This time the question was how large the memory footprint of Node actually is.

The method for measurement was rough:

  • Boot into the image and run free -m on the console to find out the memory usage
  • Use df -h to determine the rootfs size

LiveCD: A recent ovirt-node-iso image, which has a LiveCD size of 205MB and a rootfs size of 565MB, had a memory footprint (usage) of 626MB right after booting into the installer.

Plain rootfs: For comparison I took an image used for imgbased. That image has a rootfs size of 1.2GB and had a memory footprint of 192MB right after boot.

The interesting bit is that

  • the livecd needs 626MB of RAM to boot into its 565MB large rootfs, while
  • the plain rootfs needs only 192MB to boot into its 1.2GB large rootfs.

So where does the difference come from? The squashfs containing the rootfs on the LiveCD needs to be extracted before it can be booted. This is done by dracut using device-mapper, so all 565MB are pushed into RAM before the boot can continue.
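
On a booted LiveCD this device-mapper setup can be inspected directly (the exact device names depend on the dracut version):

# List the device-mapper targets dracut created for the live image
$ dmsetup ls
# And see how they stack on top of the loop devices
$ lsblk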

So we learned that squashing the rootfs reduces the “deployment size”, but results in a larger memory overhead at runtime (at least if the squashfs is used).

The plain rootfs does not need to be uncompressed before it can be used, that is why much less memory is used at runtime. But the delivery size is much larger.

Dear Lazyweb, do you know how I can see the number of pages or the amount of memory claimed by the squashfs module and/or the device-mapper in this memory voodoo?

Say Hello to the oVirt Engine Virtual Appliance

Virtual Appliance

One of the things on the list for oVirt 3.5 was the oVirt Virtual Appliance. Huh, what’s that? you might ask. Well, imagine a cloud image with oVirt Engine 3.5 and its dependencies pre-installed, and a sane default answer file for ovirt-engine-setup. All of this delivered in an OVA file. The intention is to get you a running oVirt Engine without much hassle.

Furthermore this appliance can be used in conjunction with the Self Hosted Engine feature, and the upcoming oVirt Node Hosted Engine plugin (note the Node within).

Just as a reminder to myself: Hosted Engine is a feature where a VM containing the oVirt Engine instance is managed by itself.

As you can find more information about the oVirt Hosted Engine and oVirt Node Hosted Engine elsewhere, let me just drop a couple of words on the appliance.

The appliance is based on the Fedora 19 cloud images, with some modifications and oVirt Engine packages pre-installed. An answer file can be used as a starting point for engine-setup.

Quick Guide

Build or download the appliance yourself

# Get the sources
$ git clone git://gerrit.ovirt.org/ovirt-appliance
$ cd ovirt-appliance
$ git submodule update --init
$ cd engine-appliance

# To only build the `.raw` image use:
$ make ovirt-appliance-fedora.raw

# And run the image:
$ qemu-kvm -snapshot -m 4096 -smp 4 -hda ovirt-appliance-fedora.raw

Inside the VM:

  • Wait a bit
  • Finish the initial-setup (set a root password and optionally add a user)

and run:

$ engine-setup --config-append=ovirt-engine-answers

Building the virtual appliance

To build the appliance you need three ingredients:

  • The appliance kickstarts (kept in the ovirt-appliance repo)
  • A Fedora 19 boot.iso (or the netinstall iso)
  • lorax and pykickstart installed

The build process can then be initiated by running:

$ yum install lorax pykickstart
$ git clone git://gerrit.ovirt.org/ovirt-appliance
$ cd ovirt-appliance
$ git submodule update --init
$ cd engine-appliance

# Build the .ova
$ make

# Or: To only build the `.raw` image (without sparsification/sysprep) use:
$ make ovirt-appliance-fedora.raw

The .ova build will actually go through the following steps:

  • Create a kickstart from the provided template
  • Pass the boot iso and kickstart to livemedia-creator (part of lorax)
  • sysprep, resize, sparsify and convert the intermediate image to OVA

The .ova file now contains some metadata and the qcow2 image. To extract the image, run:

$ mkdir out ; cd out
$ tar xf ../ovirt-appliance-fedora.ova

# Run the image:
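# (the !(*.meta) pattern is a bash extglob; enable it with `shopt -s extglob` if needed)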
$ qemu-kvm -snapshot -m 4096 -smp 4 -hda images/*/!(*.meta)

Running the virtual appliance

Once the image is built (an image called ovirt-appliance-fedora.ova should be in your working directory) you can point hosted-engine-setup to it, which will use it for the initial VM. If you want to try the image with qemu (or libvirt), just use the .raw image (also available in the current working directory) and something like:

$ qemu-kvm -snapshot -m 4096 -smp 4 -hda ovirt-appliance-fedora.raw

Once you boot into the image, the initial-setup dialog will pop up to guide you through some initial steps.

Finishing the ovirt-engine-setup

Once you have finished the initial-setup (which should be self-explanatory), log in as root and run:

$ engine-setup --config-append=ovirt-engine-answers

Comments on some design decisions

Why Fedora and why 19? Because oVirt Engine runs fine on Fedora 19. Also, Fedora provides a nice set of cloud images (kickstarts) from which the oVirt Engine appliance inherits; this eases maintenance. Fedora 20 is not used because Engine did not support it when the development of the appliance started.

Why not CentOS? We started with Fedora 19 because the cloud images were available. The plan is to either adapt them to CentOS, or to check whether CentOS also has cloud image kickstarts from which we could inherit.

Why initial-setup? Another reason for using Fedora 19 was that anaconda could be leveraged to run the initial-setup. The initial-setup is responsible for asking the user some questions (what root password, what timezone, and whether an additional user should be created). cloud-init could not be used, because cloud-init requires some kind of management instance at boot time (like oVirt or OpenStack) to get configured. But this isn’t the case with the virtual appliance, because the appliance will only become the Engine.

A FutureFeature could be to add another spoke to the initial-setup where the remaining questions for the engine-setup are asked. That way a user is actually guided through the whole setup and does not need to manually trigger engine-setup after login.

Less maintenance!? In general the ovirt-appliance-fedora.ks inherits from the fedora-spin-kickstarts/fedora-cloud-base.ks file. We also try hard not to diverge too much from the upstream configuration. But some modifications are applied to the final (post-ksflatten) kickstart, to change some defaults which are currently set in the fedora-cloud-base.ks.

In detail we do the following:

  • Don’t blacklist any package - to prevent missing dependencies
  • Disable the text installation - it does not work with livemedia-creator
  • Change the partition (rootfs) size to 4GB
  • Generalize the network activation - to be independent of NIC names
  • Ignore missing packages - because the cloud ks uses Fedora 20 package names
  • Do not explicitly set the default target
  • Remove the disablement of initial-setup - because we use it
  • Remove the dummy user game - not needed because initial-setup is used

Take a look at the Makefile for the exact information.
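
Just to illustrate the kind of tweak (the sed expression and sizes below are made up; the authoritative rules live in the Makefile):

# Flatten all kickstart includes into a single file, then patch the result
$ ksflatten -c ovirt-appliance-fedora.ks -o ovirt-appliance-fedora.flat.ks
# e.g. bump the rootfs partition size (illustrative values only)
$ sed -i 's/--size=3000/--size=4096/' ovirt-appliance-fedora.flat.ks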

Where is the UI? The appliance comes without a desktop environment. There is no hard need for it (some other host with an OS can be used to access Engine’s web-ui) and it keeps the image small.

If you want to add a desktop environment, you are free to do so using yum.
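
For instance, something along these lines should do it (the group name may differ between releases):

# Pull in a full desktop environment group (group name may vary)
$ sudo yum groupinstall "GNOME Desktop"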

Next steps

This is the first shot of this appliance. Let’s see how it turns out. Some integration tests with the oVirt Node Hosted Engine plugin are pending. I expect some more cleanup and fixes before it’s ready for the oVirt 3.5 TestDays.

Open items include:

  • Heavy testing

So feel free to try out the ready-to-use image or build the appliance yourself. Please send feedback and questions to the users@ovirt.org mailing list.