Booting a VM of an iSCSI LUN - inside a Kubernetes cluster

Create some entities in Kubernetes:

# Create the pod, services, persistent volumes, and claims
$ kubectl create -f iscsi-demo-target-tgtd.yaml
persistentvolumeclaim "disk-custom" created
persistentvolumeclaim "disk-alpine" created
persistentvolumeclaim "disk-cirros" created
persistentvolume "iscsi-disk-custom" created
persistentvolume "iscsi-disk-alpine" created
persistentvolume "iscsi-disk-cirros" created
service "iscsi-demo-target" created
pod "iscsi-demo-target-tgtd" created

Now use a pod to access the created target:

# Run a qemu instance to see if the target can be used
# Note: This is not testing the PV or PVC, just the service and target
# Use ctrl-a c quit to quit
$ kubectl run --rm -it qemu-test --image=kubevirt/libvirtd -- \
  qemu-system-x86_64 \
    -snapshot \
    -drive file=iscsi://iscsi-demo-target/iqn.2017-01.io.kubevirt:sn.42/2 \
    -nographic

And enjoy the boot:

ISOLINUX 6.04 6.04-pre1  Copyright (C) 1994-2015 H. Peter Anvin et al
boot: 


OpenRC 0.21.7.818fc79999 is starting up Linux 4.4.45-0-virtgrsec (x86_64)
…
Welcome to Alpine Linux 3.5
Kernel 4.4.45-0-virtgrsec on an x86_64 (/dev/ttyS0)

localhost login:

Okay - It’s not all end-to-end yet - when looking at it from a KubeVirt perspective.

What you see is a qemu instance booting off an iSCSI target LUN, offered by an iSCSI portal running as an unprivileged pod on a Kubernetes cluster.

But it’s already nice. What has been achieved so far:

  • iSCSI target pod with demo content
  • Service, volumes, claims to expose the content
  • qemu can boot from the LUN
  • VM object can be created to boot off the LUN (if you specify the LUN)

The remaining gap (which is being worked on) is to allow specifying a claim in the VM object (instead of the LUN, as we did above). And then we can use KubeVirt to boot a VM off a disk image - probably.
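
For reference, a VM definition pointing at the LUN directly might look roughly like the sketch below. Note that this is only a sketch: the disk fields mirror libvirt’s network disk definition and are my assumptions here, not a confirmed KubeVirt schema.

# Rough sketch only - the "disks" fields mirror libvirt's network disk
# definition and are assumptions, not a confirmed KubeVirt schema
$ cat <<EOF | kubectl create -f -
{
  "apiVersion": "kubevirt.io/v1alpha1",
  "kind": "VM",
  "metadata": {"name": "testvm-iscsi"},
  "spec": {
    "domain": {
      "type": "qemu",
      "memory": {"unit": "KiB", "value": 65536},
      "os": {"type": {"os": "hvm"}},
      "devices": {
        "disks": [{
          "type": "network",
          "device": "disk",
          "driver": {"name": "qemu", "type": "raw"},
          "source": {
            "protocol": "iscsi",
            "name": "iqn.2017-01.io.kubevirt:sn.42/2",
            "host": {"name": "iscsi-demo-target", "port": "3260"}
          },
          "target": {"dev": "vda"}
        }]
      }
    }
  }
}
EOF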

Containerized iSCSI Demo target

If you want to boot a VM off a disk, then you obviously need a disk.

And as I’m currently playing with launching VMs off iSCSI targets, I took this as an exercise to come up with a nice setup to easily deploy iSCSI targets on a Kubernetes cluster, to provide demo images for booting VMs.

Initially I went with writing a small docker image to set up LIO iSCSI targets. But this solution was not really portable (e.g. it does not run on minikube), because it requires kernel support (LIO is an in-kernel target).

But luckily there are other iSCSI target implementations which provide user-space iSCSI targets - like tgtd. Alpine - which I initially used - did not offer tgtd, thus I switched over to Debian, which provides - at least it feels like it - every single project that has existed since the beginning of Unix timestamp 0.

Anyhow, long story short: with tgtd it was easy to create an image which contains Alpine and CirrOS as demo images and exports them as LUNs of a tgt target.
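
Under the hood, tgtd is driven with tgtadm. The commands below are just an illustration of the kind of setup such an image performs at startup - the IQN comes from the demo, but the LUN numbering and image paths are assumptions:

# Illustration only - IQN from the demo, LUN numbers and paths are assumptions
$ tgtd -f &                                   # start the user-space iSCSI target daemon
$ tgtadm --lld iscsi --op new --mode target --tid 1 \
    -T iqn.2017-01.io.kubevirt:sn.42
$ tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
    -b /volumes/custom.img
$ tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 2 \
    -b /volumes/alpine.iso
$ tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL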

The result is a simple docker image which you can launch, and if you expose the right port, you get a ready-to-use iSCSI target which serves Alpine, CirrOS, and an empty LUN. All of those live inside the container, thus the data is not persisted.

Feel free to try it out:

$ docker run \
  -p 3260:3260 \
  -it fabiand/iscsi-demo-target-tgtd


# To test:
# In another terminal
$ qemu-system-x86_64 \
  -snapshot \
  -serial stdio \
  -drive file=iscsi://127.0.0.1/iqn.2017-01.io.kubevirt:sn.42/2

# Or just to discover
$ iscsiadm --mode discovery -t sendtargets --portal 127.0.0.1

Power of ~~love~~ containers - by acknowledging and respecting their boundaries

So - it’s about guarantees. In this post at least. Containers are so useful - in some cases - for a few reasons. One reason, though, is that containers are pretty much “independent” of the host and the host operating system. And making them independent of the host - or abstracting away the host - solves a whole class of problems.

Warkworth Castle

The author of an application does not need to take care that the OS and its devices come up correctly. Containers allow us to just assume that they will. An author does not even need to care about the real devices. An application needs a network? In containers you can work with that assumption. In the past, time and lines of code were spent just on checking that networking was available. Docker guarantees this - and even the NIC name.

And obviously docker goes beyond NICs. They found a great set of primitives and guarantees which they provide to the container on any host. Well - it only works because the distributions moved a little to provide all kernel features which docker requires - but in the end it effectively means that the docker runtime (and OCI) guarantees you the same environment regardless of the host. And this simplifies things. Things? Yes. It simplifies code, because you can work with assumptions, and it also simplifies updates, because you are isolated. And there are probably many more benefits.

Thus to me the best way to benefit from containers is to respect them and their boundaries. Boundaries in two ways, actually: on the one hand the container can expect that the boundary is always the same, and on the other hand the container should not cross the boundary (and thus avoids touching anything outside of its boundaries, like the host).

For sure there are exceptions where containers need to cross the boundary, e.g. in KubeVirt, where access to /dev/kvm is needed. But each of those escapes can lead to other potential problems: search path and tool availability differences between the container and the outer world, authentication issues because of different UIDs/GIDs inside and outside of the container, security issues if stuff is copied out of the container into external places, and so on and so forth.

Why am I iterating over this? Well, even if there are ways in the container world which allow us to break out, we should still be very careful about using them. We lose the benefits of portability and isolation the more we open up the boundaries.

For KubeVirt - for example - we know that we need to cross the boundaries in some cases, e.g. to access /dev/kvm - but OTOH we try not to cross them by not relying on software (e.g. libvirtd) or other devices (like disks) on the host. We rather aim at meeting these requirements in container- or Kubernetes-native ways (containerized libvirtd and persistent volumes in this example).

How to run a virtual machine on Kubernetes using KubeVirt

The basic steps we take are:

  1. Build and run the demo VM image
  2. Play with KubeVirt

The purpose of this walk-through is just to give you a feeling for how to work with KubeVirt. Once you are done, feel free to contribute or try out the developer setup, which is based on Vagrant.

If you want to be on the safe side, then please run this demo on Fedora 25.

Let’s start by pulling down the KubeVirt Demo repository:

$ git clone https://github.com/kubevirt/demo.git

The way this demo works is by building a disk image which is then booted by qemu. The benefit is that your host is kept isolated and will not be affected by the demo.

So, let’s build the VM disk image for qemu:

$ cd demo
$ make build

The build can take a while, depending on your internet connection. Once the image is built, we can run it:

$ ./run-demo.sh

Note that you can now run this command again and again, without the need to rebuild the image each time.

Now CentOS 7 is booting up. You’ll first be greeted by a GRUB prompt and end up at a login prompt. You’ll need to log in as root, without a password:

CentOS Linux 7 (Core)
Kernel 3.10.0-514.el7.x86_64 on an x86_64

Login as 'root' to proceed.

kubevirt-demo login: root
Last login: Thu Feb  2 12:18:05 on ttyS0
[root@kubevirt-demo ~]#

Well done, you are logged into the VM with a hopefully running Kubernetes cluster which also contains KubeVirt.

Let’s start playing with KubeVirt. KubeVirt is implemented as an add-on to Kubernetes using TPRs (ThirdPartyResources) and custom controllers. The usage of TPRs allows us to reuse the Kubernetes API, thus we can use the usual kubectl command to control KubeVirt.
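
If you are curious, you can also list the registered third party resources directly (the exact names shown by the cluster may differ):

# List the TPRs registered in the cluster
[root@kubevirt-demo ~]# kubectl get thirdpartyresources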

Let’s check that the cluster is up and running:

[root@kubevirt-demo ~]# kubectl get pods
NAME                 READY     STATUS    RESTARTS   AGE
haproxy              1/1       Running   33         16d
libvirtd-90ilw       1/1       Running   10         16d
virt-api             1/1       Running   12         16d
virt-controller      1/1       Running   12         16d
virt-handler-24yw6   1/1       Running   49         16d
[root@kubevirt-demo ~]# 

All good - all pods are running.

Let’s check if there is already any VM defined:

[root@kubevirt-demo ~]# kubectl get vms
No resources found.
[root@kubevirt-demo ~]# 

No - that’s okay. Now let’s create one; luckily there is a pre-defined one in /vm.json:

[root@kubevirt-demo ~]# cat /vm.json 
{
   "metadata": {
     "name": "testvm"
   },
   "apiVersion": "kubevirt.io/v1alpha1",
   "kind": "VM",
   "spec": {
        "nodeSelector": {"kubernetes.io/hostname":"kubevirt-demo"},
        "domain": {
          "devices": {
            "interfaces": [
              {
                "source": {
                  "network": "default"
                },
                "type": "network"
              }
            ]
          },
          "memory": {
            "unit": "KiB",
            "value": 8192
          },
          "os": {
            "type": {
              "os": "hvm"
            }
          },
          "type": "qemu"
        }
   }
}
[root@kubevirt-demo ~]# kubectl create -f /vm.json 
vm "testvm" created
[root@kubevirt-demo ~]# kubectl get vms
NAME      KIND
testvm    VM.v1alpha1.kubevirt.io

But how do we know that it’s really running? We can check by speaking to libvirt and learning about the created domain:

[root@kubevirt-demo ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 2     testvm                         running

Nice - It’s there. What can we do with it?

Currently not much - you can access it using SPICE, and you can stop it again.
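
One way to find the graphical console endpoint is to ask libvirt directly - note that this bypasses KubeVirt and is just a quick check:

# Ask libvirt for the graphics (SPICE) URI of the domain - bypasses KubeVirt
[root@kubevirt-demo ~]# virsh domdisplay testvm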

To complete the demo, we will now shut the VM down:

[root@kubevirt-demo ~]# kubectl get vms
NAME      KIND
testvm    VM.v1alpha1.kubevirt.io
[root@kubevirt-demo ~]# kubectl delete vms testvm
vm "testvm" deleted
[root@kubevirt-demo ~]# kubectl get vms
No resources found.
[root@kubevirt-demo ~]# virsh list
 Id    Name                           State
----------------------------------------------------

[root@kubevirt-demo ~]# 

Congratulations - you just created and deleted a VM using KubeVirt via the Kubernetes API.

If you hit issues or want to provide feedback, reach out to us via https://github.com/kubevirt/demo/issues.

Now we need to get around to adding disks and networking, to add some psychedelic colors to the demo.

Want to know more about KubeVirt? Take a look at this slide deck, which was presented at DevConf.cz 2017.

Hello KubeVirt

Previously I’ve been looking a little into how VMs could be run on a cluster manager like Kubernetes.

And the previous post was already pretty specific about the design of such a solution.

KubeVirt is a project implementing this approach: running virtual machines on top of Kubernetes by using Kubernetes TPRs and custom controllers and daemons.

We’ve actually been working on it for a while, and it is finally in a shape where there is a (hopefully) easy-to-use demo. Just give the following command a try on your Fedora 25 machine:

$ curl run.kubevirt.io/demo.sh | bash

This will normally not wreck your host, but instead download a virtual machine and deploy Kubernetes and KubeVirt in it. Afterwards you can easily access it to play around.

asciicast

Feel free to browse our code, documentation, and designs at https://github.com/kubevirt/kubevirt. Or provide fixes to the demo at https://github.com/kubevirt/demo.

We will also be giving two talks at

In addition there is also a small day-long KubeVirt gathering at DevConf.cz 2017.

Ever wondered about the oVirt Engine Appliance dependencies?

This is a nice chart - the different colors of the nodes encode the size of the package (green < 10 MB, yellow < 40 MB, red > 40 MB).

Source

Now it’s time to get a pair of scissors and trim this dependency tree.

One way to represent and handle Virtual Machines in Kubernetes.

Cardstock model 1

Extending Kubernetes to understand and handle Virtual Machines. But how?

In the previous post it became obvious that VMs are sometimes used in Kubernetes, but they cannot be fine-tuned, because they are often used in a way which is transparent to the user, so the user does not gain any direct access to the VM. To gain direct access to all VM properties we thus need to explicitly represent VMs inside Kubernetes.

So, what can be done? We need to define a VM type in Kubernetes. Once the type is there, we have the ability to define all relevant properties in detail. But how can this be done? Kubernetes has support for so-called third-party resources (TPRs). But let’s take a step back to get a broader context and understand what they are.

In general, Kubernetes works in a declarative and reactive way. A user creates objects of a specific kind through the Kubernetes REST API. There are controllers inside the cluster which are responsible for each and every type supported by Kubernetes. Once a controller sees a new instance of a specific type, it reacts and performs the necessary steps to bring such an object to life. For example, if a user posts a pod specification to the API server, a controller will see this new specification and get the pod scheduled on a host, where the kubelet then instantiates the pod.
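
You can observe this reactive pattern from the command line. Controllers use the watch API of the API server; the kubectl equivalent below is just an illustration:

# Watch pod objects the way a controller would (controllers use the REST
# watch API; kubectl --watch is just the command line equivalent)
$ kubectl get pods --watch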

Thus: For every type which is known to Kubernetes there is a controller responsible for handling it.

TPRs are a way to declare additional types in the Kubernetes API. After you have declared such a new type, a user can post objects of this type to the Kubernetes API, and Kubernetes will then store them like any other object. They can actually be manipulated like any other object. (In reality there are a few bugs and limitations.)

Thus we could easily use a TPR to declare a VM type within Kubernetes. And the Kubernetes REST API can be used to modify objects of this type.
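
A minimal sketch of what such a declaration could look like - the resource name and version here are illustrative, not necessarily what KubeVirt actually registers:

# Sketch only - resource name and version are illustrative
$ cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: virtual-machine.kubevirt.io
description: "A virtual machine managed by custom controllers"
versions:
  - name: v1alpha1
EOF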

The issue is that Kubernetes stores objects of this type, but there is no controller in the cluster, or daemon on a node, which knows how to handle them.

So the second thing we need to do is to come up with controllers and daemons to provide the cluster- and node-wide virtualization logic to Kubernetes. They can ideally be shipped as containers - which will allow us to directly leverage existing Kubernetes functionality like DaemonSets or ReplicaSets.
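
As a sketch of the delivery side, a node daemon could be rolled out with a DaemonSet along these lines - image and names are placeholders, not the actual KubeVirt manifests:

# Placeholder manifest - image and names are not the real KubeVirt ones
$ cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: virt-node-daemon
spec:
  template:
    metadata:
      labels:
        app: virt-node-daemon
    spec:
      containers:
      - name: virt-node-daemon
        image: example/virt-node-daemon:latest
EOF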

That’s the high-level picture. In a picture:

A long time ago in a Kubernetes far, far away …

            User
              |
              v
+-----------------------------------------+
| API Server                              |
+ - - - - - - - - - - - - - - - - - - - - +
| [RC Foo]                 [VM Bar]       |
+----A-----------------------A------------+
     |                       |
     | watching for RCs      | watching for VMs
     |                       |
+---------------+        +-----------------+
| rc-controller |        | virt-controller | 🠘 NEW
+---------------+        +-----------------+

But what do we gain? Quite a bit, actually. For example, we would inherit the deployment features of the cluster. Instead of having our own oVirt logic to turn hosts into cluster nodes, we inherit Kubernetes functionality on that front. By shipping the logic for controllers and daemons in containers, we gain delivery functionality for free (DaemonSets will ensure that a daemon is always running on all hosts in the cluster). Also some kind of failover (ReplicaSets). We also get a communication channel between them for free (the network), and a datastore for VM specifications (the API server). More challenging, but by putting our stuff into containers we isolate ourselves from the hosts - to some degree. The remaining host-specific bits can then hopefully be pushed into dedicated places - to gain more OS independence - so it does not matter whether we are running on CentOS, Fedora, Atomic, Alpine, or Ubuntu.

This all looks too good, yes. There are drawbacks, e.g. we need to adapt our software to play well with Kubernetes. And we will need to adopt the declarative and reactive patterns.

And it’s also tricky in the details. We would accept that the kubelet is the designated node-level resource manager. This is tricky in virtualization, as there might be conflicts between what the kubelet is planning and what the virtualization side needs. However, these problems - that there might be conflicts between what a workload wants and what the kubelet wants - are not virtualization specific. Therefore I’m optimistic that there will be ways of cooperating with the kubelet to solve these kinds of conflicts.

Another thing is that containers are usually - well - contained. And our daemons will need access to physical hardware, for example to do device passthrough.

A few steps forward, but also a few back …

Virtual Machines in Kubernetes? How and what makes sense?

Happy new year.

Rolls of Hay

I stopped by saying that Kubernetes can run containers on a cluster. This implies that it can perform some cluster operations (i.e. scheduling). And the question is if the cluster logic plus some virtualization logic can actually provide us virtualization functionality as we know it from oVirt.

Can it?

Maybe. At least there are a few approaches which have already tried to run VMs within or on top of Kubernetes.

Note: I’m happy to get input and clarifications on the following implementations.

Hyper created a fork to launch the container runtime inside a VM:

docker-proxy
          |
          v
[VM | docker-runtime]
          |
          + container
          + container
          :

runV is also from Hyper. It is an OCI-compatible container runtime. But instead of launching a container, this runtime will really launch a VM (libvirtd, qemu, …) with a given kernel and initrd, and a given docker (or OCI) image.

This is pretty straightforward, thanks to the OCI standard.

frakti is a component implementing the Kubernetes CRI (container runtime interface), and it can be used to run VM-isolated containers in Kubernetes by using Hyper (see above).

rkt is also a container runtime, but it supports being run inside KVM. To me this looks similar to runV, as a VM is used for isolation purposes around a pod (not a single container).

  host OS
    └─ rkt
      └─ hypervisor
        └─ kernel
          └─ systemd
            └─ chroot
              └─ user-app1
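
For reference, selecting the KVM-based stage1 looks roughly like this (the stage1 image name and version tag are illustrative and may differ between rkt releases):

# Illustrative - stage1 image name/version may differ between rkt releases
$ sudo rkt run \
    --stage1-name=coreos.com/rkt/stage1-kvm:1.23.0 \
    --insecure-options=image \
    docker://nginx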

ClearContainers also seems to be much like runV and the alternative stage1 for rkt.

RancherVM uses a different approach - the VM is run inside the container, instead of being wrapped by it (like in the approaches above). This means the container contains the VM runtime (qemu, libvirtd, …). The VM can actually be addressed directly, because it’s an explicit component.

  host OS
    └─ docker
      └─ container
        └─ VM

This brings me to the wrap-up. Most of the solutions above use VMs as an isolation mechanism for containers. This happens transparently - as far as I can tell the VM is not directly exposed to higher levels, and can thus not be directly addressed in the sense of being configured (e.g. adding a second display).

Except for the RancherVM solution, where the VM is running inside a container. Here the VM is layered on top, and is basically not hidden in the stack. By default the VM inherits stuff from the pod (e.g. networking, which is solved pretty nicely), but it would also allow doing more with the VM.

So what is the takeaway? So-so, I would say. It looks like there is at least interest in somehow getting VMs working for one use-case or another in the Kubernetes context. In most cases the VM is hidden in the stack - this currently prevents directly accessing and modifying the VM, and it could imply that the VM is handled like a pod. Which means that the assumptions you have about a container will also apply to the VM: it’s stateless, it can be killed and reinstantiated. (This statement is pretty rough and hides a lot of details.)

VM: The issue is that we do care about VMs in oVirt, and that we love modifying them - like adding a second display, migrating them, tuning the boot order, and other fancy stuff. RancherVM looks to be going in a direction where we could tune, but the others don’t seem to help here.

Cluster: Another question is: all the implementations above care about running a VM, but oVirt cares about more - it cares about cluster tasks, e.g. live migration and host fencing. And if the cluster tasks are on Kubernetes’ shoulders, then the question is: does Kubernetes care about them as much as oVirt does? Maybe.

Conceptually: Where do VMs belong? The implementations above hide the VM details (except RancherVM) - one reason is that Kubernetes does not care about this. Kubernetes does not have a concept for VMs - not for isolation and not as an explicit entity. And the question is: should Kubernetes care? Kubernetes is great with containers - and VMs (in the oVirt sense) are so much more. Is it worth pushing all the needed knowledge into Kubernetes? And would this actually see acceptance from Kubernetes itself?

I tend to say No. The strength of Kubernetes is that it does one thing, and it does it well. Why should it get so bloated to expose all VM details?

But maybe it can learn to run VMs, and know enough about them to provide a mechanism to pass through additional configuration to fine-tune a VM.

Many open questions. But also a little more knowledge - and a post that got a little long.

Generic Cluster Management + Virtualization Flavor

oVirt is managing a cluster of machines, which form the infrastructure to run virtual machines on top.

Yes - That’s true. We can even formulate this - without any form of exaggeration and you can probably even find a proof for this - mathematically:

  Generic Cluster Knowledge
+ Virtualization Specific Cluster Knowledge
--------------------------------------------------------
  Absolutely Complete Virtualization Management Solution

You might disagree with this view, that’s fine - it is just one of many views on this topic. But for the sake of discussion, let’s take this view.

Add maths

What I consider to be generic cluster knowledge is stuff like:

  • Host maintenance mode
  • Fencing
  • To some degree even scheduling
  • Upgrading a cluster
  • Deploying a cluster (i.e. the node lifecycle, like joining a cluster)

Besides that, even broader topics are not specific to virtualization, like storage - regardless of what is running on a cluster, you need to provide storage to it, or at least run it off some storage (don’t pull out PXE now …). The same is true for networking - workloads on a cluster are usually not isolated, and thus need a way to communicate.

And then there are the workload-specific bits - in oVirt it is all about virtualization:

  • Specific metrics for scheduling
  • Logic to create VMs (busses, devices, look at a domxml)
  • Different scheduling strategies
  • Hotplugging
  • Live Migration
  • Specifics of networking and storage related to virtualization
  • Host device passthrough

… to name just a few. These (and many more) form the virtualization specific knowledge in oVirt.

So why is it so important to me to separate the logic contained in oVirt in this particular way? Well - oVirt is interesting to people who want to manage VMs (on a data center scale and reliability level). This is pretty specific. And it’s all tightly integrated inside of oVirt. Which is good on the one hand, because we can tune it at any level towards our specific use-case. The drawback is that we need to write every level in this stack mostly by ourselves.

With this separation at hand, we can see that this kind of generic cluster functionality might be found in other cluster managers as well (maybe not exactly, but to some degree). If such a cluster manager exists, then we could look and see if it makes sense to share functionality, and then - to tune it towards our use-case - just add our flavor.

Any flavor you like V5.0

“Yes, but …”

Yes, so true - but let’s continue for now.

A sharp look into the sea of technology reveals a few cluster managers. One of them is Kubernetes (which is also available on Fedora and CentOS).

It describes itself as:

Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, providing container-centric infrastructure.

Yes - the container word was mentioned. Let’s not go into this right now, but instead, let’s continue.

After looking a bit into Kubernetes it looks like there are areas - the generic bits - in which there is an overlap between Kubernetes and oVirt.

Yes - There are also gaps, granted, but that is acceptable, as we always start with gaps and find ways to master them. And sometimes you are told to mind the gap - but that’s something else.

Getting back to the topic - if we now consider VMs to be just yet another workload, and VM management to be just another application (in oVirt, a Java one; excluding the exceptions), then the gap might not be that large anymore.

… until you get to the exceptions - and the details. But that is something for next year.

qemu-install - Ha!

$ qemu-install() { virt-install --print-xml "$@" | sudo -E virsh domxml-to-native qemu-argv /dev/stdin ; }

But libvirtd does more than this, so take care and don’t expect too much.
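
For example, an invocation could look like this (the arguments are just an illustration; depending on your setup, virt-install may insist on more options):

# Illustration only - virt-install may require additional options
$ qemu-install --name demo --memory 512 --disk none --pxe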

Advent and adventures

O' Tannenbaum

(I think I used this picture before, and it’s still nice).

Happy third advent.

oVirt Node - some time has passed since its 4.0 release. We are actually close to finishing 4.1. So far Node 4.0 (which is a redesign, based on LVM to allow atomic rollbacks and a customizable file system at the same time - plus image-based updates) has turned out to be quite stable at runtime. The bigger bugs which turned up were in the areas where we expected them - the affected areas are points where we diverge from the stock operating system: kernel/initrd location (some tools don’t work with our kernel/initrd locations), updates (you can update an image to itself), and image size (just generally slow). In oVirt 4.1 (and even in 4.0.z) we are still working to fix each of those issues, one by one. We don’t want to rush, because we want to find the right solution.

IMG_1050.jpg

(We are stuck in a hamster wheel, but we do make progress)

In the long run we mainly have stabilization on our plate, plus smaller improvements. A larger upcoming change is to allow an anaconda installclass to define installation constraints. We need this to enforce a specific partitioning layout. Let’s see if we can come up with something reasonable for upstream.

At the bottom line, this logical rebase of Node onto fresh Fedora/CentOS technologies seems to have paid off for now. It reduced the number of bugs we had - especially around boot, hardware support, and persistence - and it also reduced the amount of work we had to do on our administration UI, because we now leverage Cockpit (well - we actually need to find some time here to fix some bugs in Cockpit which we uncovered when our flows got tested). The rebase allows us to share more code with other communities. Both of these are good to me: sharing, and other communities.

Could this model actually be something we could apply to more of oVirt? oVirt is managing your datacenter (If not, then you want to try it ;) ) - To me the question is: Where do we have the opportunity to share more code with other communities?

Actually - we already do this to some degree. Like right now - heroes are working on finally bringing NetworkManager support to vdsm. It’s crucial for a nice Cockpit integration. But it’s also a lot of work. oVirt has grown over the years from a closed .Net project to an open-source Java and Python project (yes, there are more languages involved) with a pretty broad user base (from what I can tell).

Hero

(A hero)

Without risking the stability of the project, it’s difficult to share code with other projects, because sharing code brings its own requirements, e.g. schedule alignment and (obviously) our integration with this other (and always changing) project. Like for vdsm above: it was and is not easy to finally integrate NetworkManager into vdsm - but I’m confident that it will pay off. It actually already does - because of our integration with NetworkManager, bugs were found in Cockpit and NetworkManager. We don’t benefit from this directly - for us it just means that we finally reach feature parity with our NetworkManager integration - but all users of NetworkManager benefit. It’s just harder to see, because it’s an indirect benefit. And in the future we hopefully benefit by inheriting bug fixes and more features from NetworkManager.

So - it’s not easy to share and collaborate, but is it easier to build everything ourselves?

What can we - oVirt - share with others? Are there elements to share which are useful for us and would be useful for others too? Or: are there already things out there which we could leverage more, collaborating with others? We’ll need to see; we need to look at what we use in our own backyard, and we need to look at what there is in the neighbourhood.

cluster

(This is a cluster, a cluster of lamps)

oVirt is managing a cluster of machines, which form the infrastructure to run virtual machines on top.

Let’s start with this high-level view - and a couple of Spekulatius.

Marie to Me

Fedora 25 and rust


$ rustc -o hello <(echo 'fn main() { println!("Hello World!") }')
$ ./hello
Hello World!

So - What am I going to do with this?

4.0, automation, and upstream

endless.

oVirt 4.0 has been out for a while. The Node team has been busy with understanding a few - really, just a few - bugs around Node Next.

But at the bottom line we achieved what we aimed at: Lowering the day-to-day load on our small team.

Using anaconda, cockpit, and LVM still looks promising.

Currently we are a little busy working on implementing 4.1 features (3rd-party RPM persistence and improved installation error handling) as well as improving the automation even more.

And beyond that? Well, I think documentation is the part that will slowly see updates. We also need to improve our Jenkins CI. One step at a time, we can move forward.

And beyond that? Well, stabilization is the key here. Having a robust CI helps to support this.

And beyond that? Well, upstream. Finally I think we can shift to fixing stuff upstream. In previous years we were concerned with our bugs in Node, and really had no room to fix them upstream, because there was none. Now that we are really based on upstream again, we do have the ability to contribute there.

So, after the hype around 4.0 we are slowly drifting into day-to-day maintenance again, but we don’t drown in bugs, and that allows us to improve and work upstream.