The CI-tron project is shipped as a container image which can be run on
existing infrastructure, or booted directly using Boot2container, either in a
virtual machine or netbooted over the internet using an iPXE boot server.
You are about to deploy a system that potentially lets people on the internet
run their own code on your machines. If done improperly, this presents a
serious risk to the confidentiality of your data and identity, and may be used
to commit crimes.
Please read this section very carefully before moving any further.
CI-tron makes it really easy to control your machines, which means it can be
very easy for bad actors to use your infrastructure to spam, scam, or DDoS other
web services, or to use your resources to mine cryptocurrencies.
To reduce the chances of this happening, CI-tron encourages the right behavior:
The CI-tron gateway’s firewall drops all traffic towards the internet that is
not HTTP(S), SSH, WireGuard, or SNMP;
Test machines should be plugged into a separate network (the private
network) and should not have unmediated internet access;
Control equipment such as PDUs should be plugged into a separate network (the
mgmt network), and have access neither to the internet nor to the DUTs.
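As an illustration, the egress policy described above could be expressed as an nftables-style ruleset along the lines of the following sketch. This is not the actual ruleset CI-tron installs; the WireGuard and SNMP ports shown are merely the conventional defaults:

```
table inet egress {
    chain out {
        type filter hook output priority 0; policy drop;
        # Let replies to already-established flows through
        ct state established,related accept
        # SSH, HTTP and HTTPS
        tcp dport { 22, 80, 443 } accept
        # WireGuard (default port) and SNMP
        udp dport { 51820, 161 } accept
    }
}
```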
graph LR
internet[fa:fa-globe Internet] <-->|fa:fa-ethernet eth0| gateway[fa:fa-server Testing<br>Gateway]
gateway <--> |fa:fa-ethernet eth1 / Port 1 | switch[fa:fa-ethernet Private<br>Network Switch]
gateway <--> |fa:fa-wifi WiFi AP| smartplug[fa:fa-wifi Smart plug]
gateway <--> |fa:fa-usb USB cable | usbhub[fa:fa-usb USB Hub with PPPS]
smartplug <--> |Power cable| dut1[fa:fa-desktop DUT 1]
switch <--> |fa:fa-ethernet Port 2| dut1
usbhub <--> |fa:fa-usb Port 1| dut2[fa:fa-mobile-phone DUT 2]
The CI-tron gateway container is meant to be run on the gateway machine of
a CI farm, and comes with the following requirements:
Network: One Ethernet network interface, connected to the internet, with the
following optional interfaces:
WiFi adapters: Will be used to create a WiFi network with the desired
SSID and passphrase, and then added to the mgmt network. This is meant
for PDUs (see Adding the PDU to your CI-tron instance), or for allowing local SSH access.
Ethernet adapters: Any extra Ethernet network interface will be added
to the private network, which is used to connect test machines to the
CI-tron gateway.
Volumes: The container requires a volume to store persistent data
(mounted at /config), and optionally a temporary volume
which acts as a cache across reboots (mounted at /cache).
Container: The container provides network services for multiple network
protocols (DHCP, DNS, TFTP, HTTP, …), and must therefore either be run as
a privileged container using --network host, or be granted the following:
CAP_AUDIT_CONTROL: For logind, to be able to log in using SSH
Network:
--network bridge: The network the container uses to access the
internet, AKA the public network;
podman network create -d macvlan -o parent=$PRIVATE_NIC -o mode=passthru -o no_default_route=1 --ipam-driver=none ci-tron-private-nic1
and --network ci-tron-private-nic1: The private interface you want
to share for testing (repeat as needed).
The latest CI-tron container can be accessed using
registry.freedesktop.org/gfx-ci/ci-tron/gateway:latest, but you may want to
pin it to a certain version. Until we start creating proper releases, you will
find all the different versions of CI-tron in our
container registry.
CI-tron is meant to provide a plug-and-play experience, and as such it is best
deployed as the sole application on a dedicated machine.
It can however operate with a reduced feature-set on already-existing
infrastructure if need be.
Assuming you already have internet connectivity on your dedicated machine, the
container should be executed in this fashion, all as root:
# FARM_NAME=myfarm
# # Create the volumes that will hold the config and caches
# podman volume create ci-tron-config
# podman volume create ci-tron-cache
# podman run --rm -ti --pull=always --privileged \
      --name ci-tron --network host --hostname $FARM_NAME-gateway \
      -v ci-tron-config:/config -v ci-tron-cache:/cache \
      registry.freedesktop.org/gfx-ci/ci-tron/gateway:latest
If you do not have an operating system installed on your gateway, consider using
Boot2container, with the following configuration expressed as an iPXE
script:
#!ipxe
# Set the defaults for the gateway
set farm_name my_farm
set gateway_cache_device auto
set gateway_extra_cmdline
set gateway_img_path registry.freedesktop.org/gfx-ci/ci-tron/gateway:latest
set default_cmdline initrd=initrd.magic net.ifnames=0 b2c.ntp_peer=auto b2c.pipefail b2c.swap=32G b2c.volume="ci-tron-cache" b2c.volume="ci-tron-config" b2c.hostname=${farm_name}-gateway b2c.run="-ti --pull=always -v ci-tron-cache:/cache -v ci-tron-config:/config -v /dev:/dev -v /lib/modules:/lib/modules -v /lib/firmware:/lib/firmware docker://${gateway_img_path}" b2c.cache_device="${gateway_cache_device}" b2c.shutdown_cmd="reboot -f" ${gateway_extra_cmdline}
kernel https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/v0.9.16/downloads/linux-x86_64 ${default_cmdline}
initrd https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/v0.9.16/downloads/initramfs.linux_amd64.cpio.xz
initrd https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/v0.9.16/downloads/linux-x86_64.depmod.cpio.xz
initrd https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/v0.9.16/downloads/linux-x86_64.gpu.cpio
initrd https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/v0.9.16/downloads/linux-x86_64.wifi.cpio
initrd https://gitlab.freedesktop.org/gfx-ci/boot2container/-/releases/v0.9.16/downloads/linux-x86_64.ucode.cpio # For Intel-based machines
boot
Rather than embedding the above script in iPXE directly (see iPXE Crypto for
more details), you could store it in a public git repository and then source it
from another embedded script which would never need to change:
#!ipxe
set boot_script_url https://gitlab.freedesktop.org/mupuf/ci-tron-boot-cfg/-/raw/main/my_farm.ipxe
echo Welcome to the CI-tron iPXE boot script
:retry
echo Acquiring an IP
dhcp || goto retry # Keep retrying indefinitely
echo Got the IP: ${netX/ip} / ${netX/netmask}
echo
echo Updating the current time using NTP
ntp pool.ntp.org || goto retry
echo Current unixtime: ${unixtime} (use `date --date=@$$((${unixtime}))` to decode it)
echo
echo Chainloading from the iPXE server ${IPXE_SERVER_FQDN} ...
chain ${boot_script_url} || goto retry
# The above command may "succeed" but we actually fail to
# boot. This could happen if the iPXE boot file is
# successfully returned, but the URLs to the kernel and
# ramdisk are invalid, for example. In cases like these,
# continuously retry the netboot, rather than exiting iPXE and
# potentially getting stuck indefinitely in the firmware's next
# boot method. The sleep acts as a simple rate limiter.
sleep 60
goto retry
You may perform all these steps by hand, or you can rely on our
iPXE boot server if you would like additional features such as automatic
backups of your config volume.
Once your container has booted and shows the dashboard, you will want to add
your SSH keys to /config/ssh/authorized_keys by killing the
dashboard (CTRL+C) then typing nano /config/ssh/authorized_keys.
$ # Create the volumes that will hold the config and caches
$ podman volume create ci-tron-config
$ podman volume create ci-tron-cache
$ podman run --rm -ti --pull=always \
      --cap-add CAP_NET_ADMIN \
      --cap-add CAP_NET_RAW \
      --cap-add CAP_AUDIT_CONTROL \
      --device /dev/fuse \
      --device /dev/net/tun \
      --device /dev/kvm \
      --name ci-tron --hostname vivian-mupuf-gateway \
      -v ci-tron-config:/config -v ci-tron-cache:/cache \
      --network bridge \
      -p 60022:22 \
      -p 8000:80 \
      -p 8100:8100 \
      registry.freedesktop.org/gfx-ci/ci-tron/gateway:latest
Once your container has booted and shows the dashboard, you will want to add
your SSH keys to the container for remote access. You may do so by mounting the
config volume using podman unshare podman volume mount ci-tron-config and
then editing $MOUNT_POINT/ssh/authorized_keys.
❌ Dynamic network interfaces (needed for Ethernet over USB, for Fastboot devices)
❌ NFS / imagestore
# # Allow IP forwarding, since we can't do that from inside the container
# # without requiring insane privileges. May not even be needed.
# echo 1 > /proc/sys/net/ipv4/ip_forward
# # Create the networks
# podman network create -d macvlan -o parent=$PRIVATE_NIC -o mode=passthru -o no_default_route=1 --ipam-driver=none ci-tron-private-nic
# podman network create -d macvlan -o parent=$MGMT_NIC -o mode=passthru -o no_default_route=1 --ipam-driver=none ci-tron-mgmt-nic
# # Create the volumes that will hold the config and caches
# podman volume create ci-tron-config
# podman volume create ci-tron-cache
# # TODO: add support for the USB-based PDUs
# # TODO: Find an alternative to sharing /dev/, maybe through a local
# # service that forwards consoles to SALAD using socat or similar
# podman run --rm -ti --pull=always \
      --cap-add CAP_NET_ADMIN \
      --cap-add CAP_NET_RAW \
      --cap-add CAP_AUDIT_CONTROL \
      --cap-add CAP_SYS_TIME \
      --device /dev/fuse \
      --device /dev/net/tun \
      --device /dev/kvm \
      -v /dev:/dev \
      --name ci-tron --hostname vivian-mupuf-gateway \
      -v ci-tron-config:/config -v ci-tron-cache:/cache \
      --network bridge \
      --network ci-tron-private-nic \
      -p 60022:22 \
      -p 8000:80 \
      -p 8100:8100 \
      registry.freedesktop.org/gfx-ci/ci-tron/gateway:latest
Once your container has booted and shows the dashboard, you will want to add
your SSH keys to the container for remote access. You may do so by mounting the
config volume using podman unshare podman volume mount ci-tron-config and
then editing $MOUNT_POINT/ssh/authorized_keys.
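The key-installation step above can be sketched as follows. The mount point is whatever `podman unshare podman volume mount ci-tron-config` printed (a temporary directory stands in for it here), and the key line is a placeholder for your own public key:

```shell
# Stand-in for the mount point printed by `podman volume mount`
MOUNT_POINT="${MOUNT_POINT:-$(mktemp -d)}"
mkdir -p "$MOUNT_POINT/ssh"

# Append your public key (placeholder shown)
echo "ssh-ed25519 AAAA... user@host" >> "$MOUNT_POINT/ssh/authorized_keys"

# sshd ignores authorized_keys files with lax permissions
chmod 700 "$MOUNT_POINT/ssh"
chmod 600 "$MOUNT_POINT/ssh/authorized_keys"
```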
Right now, our gateway has no idea about any potential machine connected
to its private network interface. Let’s boot one!
If the gateway is running in vivian, then virtual DUTs can be booted by using
the vivian-dut make target. The OUTLET variable can be set to select the
outlet on the VPDU:
$ make OUTLET=0 vivian-add-dut
Alternatively, you can use the dashboard to add/start a virtual DUT by
selecting one with the arrow keys and pressing the DISCOVER button.
If all went well, you should now see a machine appear in the “Machines”
column of the dashboard, with the state DISCOVERING. After a short while, it
should transition into TRAINING.
During the TRAINING step, the machine will be booted in a loop a set
number of times in order to test its boot reliability. After the boot loop is
complete, the machine will be marked as ready for testing and its state should
change to IDLE.
Your first virtual test machine is now available for testing both locally and
on your chosen GitLab instance.
If all went well, congratulations! You seem to have a functional
setup! Most of the steps above are amenable to further configuration.
You are now in a position to play around and modify defaults to your
testing requirements.
After making changes to Ansible or any other component of the
gateway image, it is recommended to test the changes by either
re-generating the gateway image, or running the ansible playbook
against an already-running vivian instance:
$ make vivian-provision
If you want to iterate faster, you may limit the ansible job to some roles
by using the TAGS= parameter:
V=1 Turn on more verbose logging messages in the build process
ANSIBLE_EXTRA_ARGS="-vvv..." Pass any custom flags to
ansible-playbook. Helpful for re-running only tagged roles in
the ansible build, for example. You can also use this to override variables
used by the ansible playbook, for example: ANSIBLE_EXTRA_ARGS="-e foo=bar"
IMAGE_NAME=localhost:8088/my/image
The container image name to tag the image with. WARNING: The image
will automatically be pushed to the registry named in the tag!
Defaults to localhost:8088/gfx-ci/ci-tron/gateway:latest.
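Putting these parameters together, a faster iteration might look like the following (the tag name `dashboard` is purely illustrative; use tags that actually exist in the playbook):

```
$ make V=1 ANSIBLE_EXTRA_ARGS="-vvv" TAGS=dashboard vivian-provision
```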
Once completed, a container image will be generated, for example,