Installation and development

The CI-tron project is shipped as a container image which can be run on existing infrastructure, or booted directly using Boot2container, either in a virtual machine or netbooted over the internet using an iPXE boot server.

Security considerations

Danger

You are about to deploy a system that potentially lets people on the internet run their own code on your machines. If done improperly, this presents a serious risk for the confidentiality of your data / identity, and may be used to commit crimes.

Please read this section very carefully before moving any further.

CI-tron makes it really easy to control your machines, which means it can be very easy for bad actors to use your infrastructure to spam, scam, DDoS other web services, or use your resources to mine cryptocurrencies.

To reduce the chances of this happening, CI-tron encourages the right behavior:

  • The CI-tron gateway’s firewall drops all traffic towards the internet that is not HTTP(S), SSH, WireGuard, or SNMP;

  • Test machines should be plugged into a separate network (the private network) and should not have unmediated internet access;

  • Control equipment such as PDUs should be plugged into a separate network (the mgmt network), with access neither to the internet nor to the DUTs.

    graph LR
        internet[fa:fa-globe Internet] <-->|fa:fa-ethernet eth0| gateway[fa:fa-server Testing<br>Gateway]
        gateway <--> |fa:fa-ethernet eth1 / Port 1 | switch[fa:fa-ethernet Private<br>Network Switch]
        gateway <--> |fa:fa-wifi WiFi AP| smartplug[fa:fa-wifi Smart plug]
        gateway <--> |fa:fa-usb USB cable | usbhub[fa:fa-usb USB Hub with PPPS]

        smartplug <--> |Power cable| dut1[fa:fa-desktop DUT 1]
        switch <--> |fa:fa-ethernet Port 2| dut1
        usbhub <--> |fa:fa-usb Port 1| dut2[fa:fa-mobile-phone DUT 2]
        

    Expected network configuration
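To convince yourself the egress policy behaves as described, you could run a quick check from a machine sitting on the private network. The hostnames, the use of bash's /dev/tcp, and the expectation that HTTPS succeeds are all assumptions here; adjust them to whatever your firewall actually allows.

```shell
# Sketch of an egress sanity check, run from a machine on the private
# network. Outbound SMTP (port 25) should be dropped by the gateway,
# while HTTPS should go through.
if timeout 5 bash -c 'exec 3<>/dev/tcp/example.com/25'; then
    echo "WARNING: outbound SMTP is reachable, check the firewall"
else
    echo "outbound SMTP blocked, as expected"
fi
curl -fsS --max-time 5 https://example.com >/dev/null \
    && echo "HTTPS egress works"
```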

Requirements

The CI-tron gateway container is meant to be run on the gateway machine of a CI farm, and comes with the following requirements:

  • Network: One Ethernet network interface, connected to the internet, with the following optional interfaces:

    • WiFi adapters: Will be used to create a WiFi network with the wanted SSID and passphrase, then added to the mgmt network. This is meant for PDUs (see Adding the PDU to your CI-tron instance) or for local SSH access.

    • Ethernet adapters: Any extra Ethernet network interface will be added to the private network, which is used to connect test machines to the CI-tron gateway.

  • Volumes: The container requires a volume to store persistent data (mounted at /config), and optionally a temporary volume which acts as a cache across reboots (mounted at /cache).

  • Container: The container provides network services for multiple network protocols (DHCP, DNS, TFTP, HTTP, …), and thus must either be run as a privileged container using --network host, or with the following:

    • Capabilities:

      • CAP_NET_ADMIN: For binding ports < 1024

      • CAP_NET_RAW: For our DHCP server

      • CAP_AUDIT_CONTROL: For logind, to be able to log in using SSH

    • Network:

      • --network bridge: The network the container uses to access the internet, a.k.a. the public network;

      • podman network create -d macvlan -o parent=$PRIVATE_NIC -o mode=passthru -o no_default_route=1 --ipam-driver=none ci-tron-private-nic1, followed by --network ci-tron-private-nic1: The private interface you want to dedicate to testing (repeat as needed).

The latest CI-tron container can be accessed using registry.freedesktop.org/gfx-ci/ci-tron/gateway:latest, but you may want to pin it to a certain version. Until we start creating proper releases, you will find all the different versions of CI-tron in our container registry.
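One way to pin the image is to reference it by digest rather than by tag. The sketch below uses skopeo to resolve the digest currently behind :latest (it needs network access to the registry, and assumes a skopeo version that supports --format):

```shell
# Resolve the digest behind the :latest tag, then reference the image
# by digest so later pushes to :latest cannot change what you run.
IMAGE=registry.freedesktop.org/gfx-ci/ci-tron/gateway
DIGEST=$(skopeo inspect --format '{{.Digest}}' "docker://$IMAGE:latest")
echo "Pinned reference: $IMAGE@$DIGEST"
```

You can then pass `$IMAGE@$DIGEST` to podman run instead of the :latest tag.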

Deploying CI-tron

CI-tron is meant to provide a plug-and-play experience, and is therefore best deployed as the sole application on a dedicated machine.

It can however operate with a reduced feature-set on already-existing infrastructure if needs be.

Let’s review your options!

Using a dedicated gateway

Assuming you already have internet connectivity on your dedicated machine, run the container as follows, all as root:

# FARM_NAME=myfarm

# # Create the volumes that will hold the config and caches
# podman volume create ci-tron-config
# podman volume create ci-tron-cache

# podman run --rm -ti --pull=always --privileged \
      --name ci-tron --network host --hostname $FARM_NAME-gateway \
      -v ci-tron-config:/config -v ci-tron-cache:/cache \
      registry.freedesktop.org/gfx-ci/ci-tron/gateway:latest
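The command above runs in the foreground. If you want the gateway to start automatically at boot, one option is to wrap it in a systemd unit; the sketch below uses podman generate systemd against the already-created ci-tron container (newer podman versions recommend Quadlet units instead, so treat this as one possible approach):

```shell
# Generate a systemd unit from the running ci-tron container, install
# it system-wide, and enable it so it starts on boot.
podman generate systemd --new --name ci-tron --files
mv container-ci-tron.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now container-ci-tron.service
```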

Booting CI-tron without a host operating system

If you do not have an operating system installed on your gateway, consider using Boot2container, with the following configuration expressed as an iPXE script:

#!ipxe

# Set the defaults for the gateway
set farm_name my_farm
set gateway_cache_device auto
set gateway_extra_cmdline
set gateway_img_path registry.freedesktop.org/gfx-ci/ci-tron/gateway:latest

set default_cmdline initrd=initrd.magic net.ifnames=0 b2c.ntp_peer=auto b2c.pipefail b2c.swap=32G b2c.volume="ci-tron-cache" b2c.volume="ci-tron-config" b2c.hostname=${farm_name}-gateway b2c.run="-ti --pull=always -v ci-tron-cache:/cache -v ci-tron-config:/config -v /dev:/dev -v /lib/modules:/lib/modules -v /lib/firmware:/lib/firmware docker://${gateway_img_path}" b2c.cache_device="${gateway_cache_device}" b2c.shutdown_cmd="reboot -f" ${gateway_extra_cmdline}

kernel https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/v0.9.16/downloads/linux-x86_64 ${default_cmdline}
initrd https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/v0.9.16/downloads/initramfs.linux_amd64.cpio.xz
initrd https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/v0.9.16/downloads/linux-x86_64.depmod.cpio.xz
initrd https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/v0.9.16/downloads/linux-x86_64.gpu.cpio
initrd https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/v0.9.16/downloads/linux-x86_64.wifi.cpio
initrd https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/v0.9.16/downloads/linux-x86_64.ucode.cpio  # For Intel-based machines
boot

Rather than embedding the above script in iPXE directly (see iPXE Crypto for more details), you could store it in a public git repository and then source it from another embedded script which would never need to change:

#!ipxe

set boot_script_url https://gitlab.freedesktop.org/mupuf/ci-tron-boot-cfg/-/raw/main/my_farm.ipxe

echo Welcome to the CI-tron iPXE boot script

:retry
echo Acquiring an IP
dhcp || goto retry # Keep retrying indefinitely
echo Got the IP: ${netX/ip} / ${netX/netmask}

echo

echo Updating the current time using NTP
ntp pool.ntp.org || goto retry
echo Current unixtime: ${unixtime} (use `date --date=@$$((${unixtime}))` to decode it)

echo

echo Chainloading from the iPXE server ${IPXE_SERVER_FQDN} ...
chain ${boot_script_url} || goto retry

# The above command may "succeed" but we actually fail to
# boot. This could happen if the iPXE boot file is
# successfully returned, but the URLs to the kernel and
# ramdisk are invalid, for example. In cases like these,
# continuously retry the netboot, rather than exiting iPXE and
# potentially getting stuck indefinitely in the firmware's next
# boot method. The sleep acts as a simple rate limiter.
sleep 60
goto retry

You may perform all these steps by hand, or you can rely on our iPXE boot server if you would like additional features such as automatic backups of your config volume.

Once your container has booted and shows the dashboard, you will want to add your SSH keys to /config/ssh/authorized_keys by killing the dashboard (CTRL+C) then typing nano /config/ssh/authorized_keys.

As a new service of an existing gateway

This deployment method disables some features, but may be more convenient.

As a rootless unprivileged container

Unsupported features:

  • ❌ Physical (non-virtual) DUTs

  • ❌ Chronyd (NTP)

  • ❌ NFS / imagestore

$ # Create the volumes that will hold the config and caches
$ podman volume create ci-tron-config
$ podman volume create ci-tron-cache

$ podman run --rm -ti --pull=always                \
    --cap-add CAP_NET_ADMIN                        \
    --cap-add CAP_NET_RAW                          \
    --cap-add CAP_AUDIT_CONTROL                    \
    --device /dev/fuse                             \
    --device /dev/net/tun                          \
    --device /dev/kvm                              \
    --name ci-tron --hostname vivian-mupuf-gateway \
    -v ci-tron-config:/config -v ci-tron-cache:/cache \
    --network bridge                               \
    -p 60022:22                                    \
    -p 8000:80                                     \
    -p 8100:8100                                   \
    registry.freedesktop.org/gfx-ci/ci-tron/gateway:latest

Once your container has booted and shows the dashboard, you will want to add your SSH keys to the container for remote access. You may do so by mounting the config volume using podman unshare podman volume mount ci-tron-config and then editing $MOUNT_POINT/ssh/authorized_keys.
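Spelled out, the steps above could look like this (the public key path is an assumption; use your own):

```shell
# Mount the config volume from within the rootless user namespace,
# append a key, then unmount. Run as the user that started the container.
MOUNT_POINT=$(podman unshare podman volume mount ci-tron-config)
podman unshare sh -c "cat $HOME/.ssh/id_ed25519.pub >> $MOUNT_POINT/ssh/authorized_keys"
podman unshare podman volume unmount ci-tron-config
```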

As a rootful unprivileged container

Unsupported features:

  • ❌ Dynamic network interfaces (needed for ethernet over USB, for Fastboot devices)

  • ❌ NFS / imagestore

# # Allow ip forwarding, since we can't do that from inside the container
# # without requiring insane privileges. May not even be needed.
# echo 1 > /proc/sys/net/ipv4/ip_forward

# # Create the networks
# podman network create -d macvlan -o parent=$PRIVATE_NIC -o mode=passthru -o no_default_route=1 --ipam-driver=none ci-tron-private-nic
# podman network create -d macvlan -o parent=$MGMT_NIC -o mode=passthru -o no_default_route=1 --ipam-driver=none ci-tron-mgmt-nic

# # Create the volumes that will hold the config and caches
# podman volume create ci-tron-config
# podman volume create ci-tron-cache

# # TODO: add support for the USB-based PDUs
# # TODO: Find an alternative to sharing /dev/, maybe through a local
# # service that forwards consoles to SALAD using socat or similar
# podman run --rm -ti --pull=always                   \
    --cap-add CAP_NET_ADMIN                           \
    --cap-add CAP_NET_RAW                             \
    --cap-add CAP_AUDIT_CONTROL                       \
    --cap-add CAP_SYS_TIME                            \
    --device /dev/fuse                                \
    --device /dev/net/tun                             \
    --device /dev/kvm                                 \
    -v /dev:/dev                                      \
    --name ci-tron --hostname vivian-mupuf-gateway    \
    -v ci-tron-config:/config -v ci-tron-cache:/cache \
    --network bridge                                  \
    --network ci-tron-private-nic                     \
    -p 60022:22                                       \
    -p 8000:80                                        \
    -p 8100:8100                                      \
    registry.freedesktop.org/gfx-ci/ci-tron/gateway:latest
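Note that the ip_forward toggle written to /proc at the top of this sequence does not survive a reboot. If it turns out to be needed on your system, a standard way to persist it is a sysctl.d drop-in (file name is arbitrary):

```shell
# Persist IPv4 forwarding across reboots
echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ci-tron.conf
sysctl --system   # reload all sysctl configuration now
```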

Once your container has booted and shows the dashboard, you will want to add your SSH keys to the container for remote access. You may do so by mounting the config volume using podman unshare podman volume mount ci-tron-config and then editing $MOUNT_POINT/ssh/authorized_keys.

In vivian, for local development of CI-tron

Unsupported features:

  • ❌ Physical (non-virtual) DUTs

If you want to develop CI-tron, you can use the built-in test environment: vivian.

$ cd ~/src/ci-tron
$ make vivian

You may then use $ make vivian-connect to get a shell inside the CI-tron container, that you can then use to edit /config/ssh/authorized_keys.

You may shutdown vivian by running poweroff in the shell inside the CI-tron container.

Spawning virtual DUTs

Right now, our gateway has no idea about any potential machine connected to its private network interface. Let’s boot one!

If the gateway is running in vivian, then virtual DUTs can be booted using the vivian-add-dut make target. The OUTLET variable can be set to select the outlet on the VPDU:

$ make OUTLET=0 vivian-add-dut

Alternatively, you can use the dashboard to add/start a virtual DUT by selecting one with the arrow keys and submitting the DISCOVER button.

If all went well, you should now see a machine appear in the “Machines” column of the dashboard, with the state DISCOVERING. After a short while, it should transition into TRAINING.

During the TRAINING step, the machine is booted in a loop a set number of times in order to test boot reliability. After the boot loop completes, the machine is marked as ready for testing and its state should change to IDLE.

Your first virtual test machine is now available for testing both locally, and on your chosen GitLab instance.

Running your first job

Now that you have at least one machine available, you may run jobs on it using the following command from inside the gateway:

$ executorctl run -t virtio:family:VIRTIO /usr/local/lib/python3.*/site-packages/valve_gfx_ci/executor/server/job_templates/interactive.yml.j2

If all went well, congratulations! You seem to have a functional setup! Most of the steps above are amenable to further configuration. You are now in a position to play around and modify defaults to your testing requirements.

Hacking on the infrastructure

After making changes to Ansible or any other component of the gateway image, it is recommended to test the changes by either re-generating the gateway image, or running the ansible playbook against an already-running vivian instance:

$ make vivian-provision

If you want to iterate faster, you may limit the ansible job to some roles by using the TAGS= parameter:

$ make TAGS=dashboard,minio,executor vivian-provision

You can get a full list of available tags by running:

$ ansible-playbook --list-tags ansible/gateway.yml

Building the container

The container image is built using rootless podman, and is provisioned using Ansible recipes (see the ansible/ subproject).

The following is a (likely incomplete) list of dependencies for a system running Debian/Ubuntu:

$ apt install buildah \
    git \
    jq \
    make \
    netcat-openbsd \
    podman \
    python \
    qemu-utils \
    qemu-system \
    skopeo \
    socat \
    wget

Note: In order to reduce round-trips to an external registry, a local registry is automatically started when running certain make targets.

The container image can be built with:

$ make gateway

Build options

  • V=1 Turn on more verbose logging messages in the build process

  • ANSIBLE_EXTRA_ARGS="-vvv ..." Pass any custom flags to ansible-playbook. Helpful for re-running only tagged roles in the ansible build, for example. You can also use this to override variables used by the ansible playbook, for example: ANSIBLE_EXTRA_ARGS="-e foo=bar"

  • IMAGE_NAME=localhost:8088/my/image The container name to tag the image with. WARNING: The image will automatically be pushed to the registry named in the tag! Defaults to localhost:8088/gfx-ci/ci-tron/gateway:latest.
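Put together, a more verbose build pushed to a custom tag might look like this (the values are illustrative, not recommendations):

```shell
make V=1 \
     ANSIBLE_EXTRA_ARGS="-e foo=bar" \
     IMAGE_NAME=localhost:8088/my/image \
     gateway
```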

Once completed, a container image will be generated, for example:

Successfully tagged localhost:8088/gfx-ci/ci-tron/gateway:latest
60cc3db9bedd2a11d8d61a4433a2a8e8daf35a59d6229b80c1fdcf9ece73b7ab

Notice that it defaults to a localhost registry. This is to save on bandwidth, since the CI-tron container is quite big.