OpenStack part 7: Storage (cinder)

Installation of cinder was quite straightforward. I created a new VM with the same specifications as the network node; this will be my first storage node.

I assigned a 250GB logical volume to the VM in libvirt. Once the VM was booted, I added that disk to a new LVM volume group, so I can hand it to cinder for the creation of volumes.
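
For reference, preparing that volume group for cinder boils down to two LVM commands. A minimal sketch, assuming the disk shows up as /dev/vdb inside the VM and using cinder's default volume group name (both assumptions on my part):
# pvcreate /dev/vdb
# vgcreate cinder-volumes /dev/vdb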

I had to set up the networking (just one interface is needed), NTP, and the proper apt repositories, just like on the other machines.

Installation of cinder itself went without a hitch; I just followed the guide. And that’s all there is to that.
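
A quick way to verify that the new storage node registered correctly is to query the volume services from the controller (run with admin credentials sourced; host names will differ per setup):
$ openstack volume service list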

OpenStack Part 6.5: Adding disks

As announced in the previous post, I bought 4 used disks from eBay and inserted them into the machine. This is the synopsis of creating a RAID5 volume, putting it into an LVM volume group, and assigning that VG to libvirt. Libvirt can then create logical volumes in that VG and attach them to the virtual machines.

A Debian configuration guide I followed:

Checking the presence of the new SATA disks in the system:
root@PANICLOUD:/home/nicky# lsblk
NAME                            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
fd0                               2:0    1     4K  0 disk
sda                               8:0    0 232.9G  0 disk
└─sda1                            8:1    0 232.9G  0 part
  └─md0                           9:0    0 465.5G  0 raid0
    ├─md0p1                     259:0    0 243.1M  0 md    /boot
    ├─md0p2                     259:1    0     1K  0 md
    └─md0p5                     259:2    0 465.3G  0 md
      ├─PANICLOUD--vg-root      253:0    0   9.3G  0 lvm   /
      ├─PANICLOUD--vg-swap_1    253:1    0  18.1G  0 lvm   [SWAP]
      ├─PANICLOUD--vg-home      253:2    0   100G  0 lvm   /home
      ├─PANICLOUD--vg-images    253:3    0   100G  0 lvm   /opt/images
      └─PANICLOUD--vg-instances 253:4    0   100G  0 lvm   /opt/instances
sdb                               8:16   0 232.9G  0 disk
└─sdb1                            8:17   0 232.9G  0 part
  └─md0                           9:0    0 465.5G  0 raid0
    ├─md0p1                     259:0    0 243.1M  0 md    /boot
    ├─md0p2                     259:1    0     1K  0 md
    └─md0p5                     259:2    0 465.3G  0 md
      ├─PANICLOUD--vg-root      253:0    0   9.3G  0 lvm   /
      ├─PANICLOUD--vg-swap_1    253:1    0  18.1G  0 lvm   [SWAP]
      ├─PANICLOUD--vg-home      253:2    0   100G  0 lvm   /home
      ├─PANICLOUD--vg-images    253:3    0   100G  0 lvm   /opt/images
      └─PANICLOUD--vg-instances 253:4    0   100G  0 lvm   /opt/instances
sdg                               8:96   0 931.5G  0 disk
sdh                               8:112  0 931.5G  0 disk
sdi                               8:128  0 931.5G  0 disk
sdj                               8:144  0 931.5G  0 disk


Creating the RAID5 volume:
root@PANICLOUD:/home/nicky# mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdg /dev/sdh /dev/sdi /dev/sdj
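
The initial array synchronization takes quite a while on disks this size; its progress can be followed in /proc/mdstat (standard md behaviour):
root@PANICLOUD:/home/nicky# cat /proc/mdstat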

Creating the LVM physical volume:
root@PANICLOUD:/home/nicky# pvcreate /dev/md1
Physical volume "/dev/md1" successfully created
root@PANICLOUD:/home/nicky# pvdisplay
--- Physical volume ---
PV Name               /dev/md0p5
VG Name               PANICLOUD-vg
PV Size               465.28 GiB / not usable 2.02 MiB
Allocatable           yes
PE Size               4.00 MiB
Total PE              119110
Free PE               35298
Allocated PE          83812
PV UUID               UdCEwD-mlv1-EuIw-L0jc-lrgK-QGjX-RM1AdD

"/dev/md1" is a new physical volume of "2.73 TiB"
--- NEW Physical volume ---
PV Name               /dev/md1
VG Name
PV Size               2.73 TiB
Allocatable           NO
PE Size               0
Total PE              0
Free PE               0
Allocated PE          0
PV UUID               FRsRMz-6HNs-S6d6-QXOw-beOf-DbB8-MGuLu1

Creating the volume group:
root@PANICLOUD:/home/nicky# vgcreate PANICLOUD_STORAGE-vg /dev/md1
Volume group "PANICLOUD_STORAGE-vg" successfully created
root@PANICLOUD:/home/nicky# vgdisplay
--- Volume group ---
VG Name               PANICLOUD-vg
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  9
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                5
Open LV               5
Max PV                0
Cur PV                1
Act PV                1
VG Size               465.27 GiB
PE Size               4.00 MiB
Total PE              119110
Alloc PE / Size       83812 / 327.39 GiB
Free  PE / Size       35298 / 137.88 GiB
VG UUID               nWgVHx-Xuq9-AGAh-iI6g-4DoR-HcIn-Nq1WhR

--- Volume group ---
VG Name               PANICLOUD_STORAGE-vg
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  1
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                0
Open LV               0
Max PV                0
Cur PV                1
Act PV                1
VG Size               2.73 TiB
PE Size               4.00 MiB
Total PE              715305
Alloc PE / Size       0 / 0
Free  PE / Size       715305 / 2.73 TiB
VG UUID               AmZ7Yx-4p37-ZgLy-wIvC-1pOU-wdwP-JK64m1

Make sure that afterwards you run “update-initramfs -u” so that the RAID array is assembled again after a reboot of the machine.
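
For the initramfs to know about the array, it should also be listed in /etc/mdadm/mdadm.conf; if it isn’t there yet, this standard mdadm incantation takes care of it before regenerating the initramfs:
root@PANICLOUD:/home/nicky# mdadm --detail --scan >> /etc/mdadm/mdadm.conf
root@PANICLOUD:/home/nicky# update-initramfs -u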

This guide summarizes the process to assign the volume group to libvirt with virt-manager.
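
For those who prefer the command line over virt-manager, virsh can define the same LVM-backed pool directly; a sketch, assuming the pool simply carries the name of the volume group:
root@PANICLOUD:/home/nicky# virsh pool-define-as PANICLOUD_STORAGE-vg logical --source-name PANICLOUD_STORAGE-vg --target /dev/PANICLOUD_STORAGE-vg
root@PANICLOUD:/home/nicky# virsh pool-start PANICLOUD_STORAGE-vg
root@PANICLOUD:/home/nicky# virsh pool-autostart PANICLOUD_STORAGE-vg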

OpenStack part 6: Networking and problem solving

I found a second networking guide, where I followed the instructions for the classic openvswitch implementation and got it up and running.

    I’ve done all this for neutron:

  • Disable libvirt networking on the compute node: guide. This wasn’t strictly necessary, but it avoids confusion with the virbr0 networks that libvirt creates.

  • Create the basic networks in openvswitch and configure the proper interfaces (see the sketch after this list).
  • Connect to the internet.
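
As a rough idea of what that openvswitch work looks like, the bridges are created with ovs-vsctl; a sketch with illustrative names (br-ex and eth2 are assumptions, not my actual interface layout):
# ovs-vsctl add-br br-int
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth2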

After all this I still wasn’t able to launch instances via the dashboard:

  1. Configuration errors: it is very easy to make a lot of typos 🙁
  2. After that it seemed like creating an instance failed because no volume could be created, so I bought four used hard drives from eBay, inserted them into the OpenStack machine, and created a RAID5 volume. I installed cinder on a separate VM and assigned it 250GB. Volumes can be created now, but there are still errors when trying to create an instance. I will create a separate post on that subject.
  3. I’ve now been searching for the cause of the failures, and I believe it is located somewhere in the network configuration. It can’t create ports on the proper networks, and it fails because of that. A few things that I’ve found, besides more typos:
    • The documentation says to set interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver in /etc/neutron/l3_agent.ini and /etc/neutron/dhcp_agent.ini, but this class can’t be loaded. I had to dig through the neutron python code, and found another way: interface_driver = openvswitch. That alias does seem to work. I think this is a regression in the Newton code, as it should be backwards compatible with the full class path.
    • I also need the ip_gre, 8021q and vxlan kernel modules loaded. These are essential if you want to create GRE/VXLAN tunnels or VLAN networks. Just modprobe them, and put them in /etc/modules so they survive a reboot (see the example after this list).
    • Still not working, but now I do see the DHCP server plugged in on the network, so progress…
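
Loading those modules and making it persistent is quick (standard Debian/Ubuntu practice):
# modprobe -a ip_gre 8021q vxlan
# printf "ip_gre\n8021q\nvxlan\n" >> /etc/modules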

OpenStack Part 4: Preparing the nodes

I’ll be starting with setting up a general purpose OpenStack environment as described in the architecture design description. Only the most basic components will be considered at this point.

I’ll be skipping Object storage (swift) and Block storage (cinder) for now.
Ceilometer is also optional, but will be included to collect usage data for learning purposes.

Several virtual machines will be set up:

  • A controller node: 1 CPU, 4GB RAM, 20GB storage
    • Identity (keystone)
    • Dashboard (horizon)
    • Telemetry (ceilometer)
    • Image service (glance)
    • This node will have 2 networking interfaces:
      • Management network
      • Tunnel network
  • A networking node: 1 CPU, 4GB RAM, 10GB storage
    • Networking (neutron)
    • This node will have 3 networking interfaces:
      • Management network
      • Tunnel network
      • The Internet
  • And a compute node: 2 CPU, 8GB RAM, 20GB storage
    • Compute (nova)
    • This node will have 2 networking interfaces:
      • Management network
      • Tunnel network

Sizing and resizing of these virtual machines can be done later on.

I’ve decided to go for an Ubuntu Server 16.04 installation for the OpenStack virtual machines, because it is claimed to have better support for the OpenStack components than Debian.

Three networks will be created:

  • Management network: NAT to eth0 on the host machine
  • Tunnel network: host only network bridge
  • Internet: directly assigned to networking node (if possible)

Following the installation guide here, I installed three Ubuntu virtual machines, version 16.04.1 LTS. I’ve set up the provider network, configured DNS and NTP, and added the OpenStack apt repository. Do note that the install guide is written for OpenStack Mitaka; Ubuntu Xenial will only support OpenStack Newton, which is still under development at this moment.
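
Adding that repository comes down to enabling the Ubuntu Cloud Archive; a minimal sketch, assuming the Newton archive is the one you want:
# apt install software-properties-common
# add-apt-repository cloud-archive:newton
# apt update && apt dist-upgrade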

OpenStack Part 3: DevStack

I’ve been trying DevStack on Ubuntu and Debian, just to get a feel for what an OpenStack installation would require from a VM (disk space and the like).

Lessons learned:

  • Use static IP addresses on your VMs; changing them later on means updating all the entries in the database that contain this IP address.
  • One does not simply reboot a DevStack machine! Even then, most of the time the cinder volume service doesn’t seem to come back online after the rejoin-stack, and I have to restart it manually to make it work again. I’m not sure whether this is a side effect of the DevStack installation or whether this is an OpenStack thing; I would’ve hoped that starting and stopping virtual machines with properly configured services would just register themselves automatically. (Update: it appears that the OpenStack services under the DevStack tool are started in a screen session, so that’s yet another tool I need to master if I want to investigate this implementation further. But it does imply that this reboot behavior is not indicative of OpenStack, but only of DevStack.)
  • I still have a lot to learn about what all these things actually are. The dashboard isn’t really making it any easier…


OpenStack Part 2: Virtual machines

OpenStack is a cloud framework; it doesn’t define the system, it is just a collection of cloud tools that integrate into whatever cloud system you need. As the OpenStack Architecture Design Guide suggests, there are a multitude of possibilities.

My idea is to go through that list one by one, unlocking more services and growing my little cloud as I go further. First up is a general purpose cloud. This includes the most basic OpenStack components, and should allow me to launch virtual machines, configure some basic networking, file and object storage, etc. This is basically an IaaS (Infrastructure as a Service) model.

At this point my idea is to run OpenStack in different VMs on my physical machine. This way, if something goes wrong and I mess up the installation too much, I can just restore a VM and start over. Additionally I can also create compute nodes at will (as far as the HW will allow it).
This means I’m going to need nested KVM.
This short guide told me how to enable it. More information can be found here.

This is what you need to end up with:
$ cat /sys/module/kvm_intel/parameters/nested
Y
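
If that file reports N instead of Y, nested virtualization can be switched on via a module option; a sketch based on the standard kvm_intel module parameter (not taken from the linked guide):
# echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm_intel.conf
# modprobe -r kvm_intel && modprobe kvm_intel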

Creating virtual machines:
$ sudo apt-get install qemu-kvm libvirt-bin
$ sudo adduser panic1 kvm
$ sudo adduser panic1 libvirtd
$ echo "LIBVIRT_DEFAULT_URI=\"qemu:///system\"" >> .profile
$ sudo vim /etc/default/libvirt-guests

From this point forward I used the virt-manager application to create a VM (make sure you select the option to copy the host CPU configuration), and installed a minimal Debian or Ubuntu on it. Repeat the same steps needed to enable nested VMs.

The same can of course be achieved with VMs in VirtualBox, running on a Windows machine.

OpenStack Part 1: Installing Debian on a RAID volume

For quite a while I’ve had an old desktop machine gathering dust in a corner. It hasn’t been powered up in about a year, and I’ve been wondering what I could still do with it. The machine itself is still quite capable: it has a first generation Core i7 CPU at 2.8GHz, and 4GB of RAM.
Recently a new option presented itself: my employer is starting up self-study gatherings and knowledge sharing sessions about “Cloud”. They’ve been doing these kinds of knowledge sharing gatherings for everything Linux kernel related for quite a while; the quadcopter was a project that I started in light of that Linux kernel knowledge sharing.
For this Cloud knowledge sharing I was thinking of setting up OpenStack on this desktop machine. I ordered some more RAM, and a couple of SSDs to upgrade the machine so it can handle a few virtual machines.

There are a couple of options when it comes to virtualization. There are hypervisors like Hyper-V from Microsoft, ESXi from VMware, and Xen from Citrix, and there are virtualization tools like QEMU, KVM, VirtualBox, and VMware Player. From all this, only two options that are free, open source, and perfectly supported by OpenStack stand out: Xen and KVM. QEMU fits those requirements too, but it is an emulator and performance is quite low; fun can still be had with it for very specific purposes later. I’ve decided to go with KVM running on Debian, just because…

So there I go, installing Debian. I’ve configured the BIOS of my desktop to create a RAID0 volume with the two SSDs I bought, and followed this guide.
It took me a lot longer than it should have, but eventually it worked when I just put everything in one partition, without LVM (I didn’t know what it was at first).

I wasn’t too happy with this; I really like the idea of having my /home folder in a separate partition. And I’ve been reading up on this LVM thing, which seemed like something that could come in really handy. LVM creates a layer on top of physical disks/partitions: a volume group. The volume group can be dynamically extended or reduced at will. Within that volume group you can create logical volumes, which are also easily extended/reduced. Within such a logical volume you typically put your favorite file system.

OK, now things start getting hairy. When setting up those LVM volumes during the Debian install, everything that guide says still seems to apply. After the rescue step to install Grub, I rebooted the machine. Grub comes up, Debian starts to boot, but ends up in emergency mode:
Welcome to emergency mode! After logging in, type "journalctl -xb" to view system logs
The log says something like:
debian systemd[222]: Failed at step EXEC spawning /bin/plymouth: No such file or directory

Plymouth seems to have nothing to do with the error here; plowing through the logs I noticed that all the disk checks on those LVM partitions were failing, and I believe this is the more likely cause, but I haven’t investigated it any further.

I came across this link instead. I reverted the RAID configuration in the BIOS, and went with the software RAID option instead. This worked: Debian boots up completely, and I have both RAID and LVM.

But I had used the guided partitioning from the Debian installer, so the /home partition was 470GB large, and I’d rather have separate partitions for the images and instances of the virtual machines I’ll be setting up next. Resizing a logical volume does require that the volume is not in use (its file system unmounted), and unmounting a file system is only possible when it’s not in use by anything or anyone.

Ctrl-Alt-F1 gets you into a terminal; log in with your user credentials:
$ pwd
Normally when you are logging in as a user, the current working directory for the shell is your home dir
$ cd /
This changes the current working dir of the shell
$ sudo su
Become root
# systemctl isolate multi-user.target
This kills off the complete graphical desktop and all applications within
# fuser /home
Shows which processes are still using /home. Should return empty
# lsof /home
Lists open files. Should return empty
# umount /home
With no open files, /home can be unmounted
# lvreduce -L 100G -r /dev/PANICLOUD-vg/home
Resize the home volume and the filesystem to 100GB
# lvcreate -n images -L 100G PANICLOUD-vg
Create new “images” logical volume
# lvcreate -n instances -L 100G PANICLOUD-vg
Create new “instances” logical volume
# systemctl isolate graphical.target
This will start the graphical desktop again

Within the graphical desktop I used the Disks utility to format and mount the volumes to /opt/images and /opt/instances respectively.
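
The command line equivalent would be roughly the following; ext4 and the fstab options are my assumptions, the Disks utility may have chosen differently:
# mkfs.ext4 /dev/PANICLOUD-vg/images
# mkfs.ext4 /dev/PANICLOUD-vg/instances
# mkdir -p /opt/images /opt/instances
# echo "/dev/PANICLOUD-vg/images /opt/images ext4 defaults 0 2" >> /etc/fstab
# echo "/dev/PANICLOUD-vg/instances /opt/instances ext4 defaults 0 2" >> /etc/fstab
# mount -a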

Quadcopter part 6: PWM

[Update] This was on kernel version 3.8. It had to be completely redone when I performed an upgrade to version 4.4. I will put my findings in a separate post.

I’ve been working on this for a very long time. It was the first time I came into contact with Device Trees in the Linux kernel, and that was a steep learning curve all in its own right, but I also came across a lot of technical difficulties and bugs in the beaglebone PWM driver code. Those definitely didn’t help. Here’s the story of enabling 4 PWM pins to do what I want…

Verifying that the PWM output can drive the Turnigy ESCs

The first experiment I wanted to do was to check whether the beaglebone PWM outputs can drive the Turnigy speed controllers. My worry was that the 3.3V PWM signal wouldn’t be enough to drive the 5V controllers.
So I used this guide to control a single (note: single, this will be important later…) PWM output. It was easy to set the period and duty cycle; I verified the timings on the oscilloscope, and they looked good. I hooked it up to a speed controller, and sure enough: nothing happened. The engine kept beeping happily (it’s an alarm indicating an invalid input signal).

Before looking up and ordering a logic level shifter, or designing a transistor circuit to boost the 3.3V signal to 5V, I decided to give it a try with an Arduino board I had lying around. The Arduino has 5V PWM outputs. Hooking up the beaglebone, the Arduino and the ESCs, I thought it was a good idea to also connect the grounds of all the devices together, just in case. It worked, the propellers spinning happily on all four engines. I corrected the direction of those propellers that were spinning backwards. In a last attempt I decided to give the beaglebone board just one more try, now that the grounds were all connected together. And that worked too!! Lesson learned: connect ground! Second lesson learned: don’t leave the propeller on the engine when doing tests; I had one of those props fly up to my face when I accidentally set the signal to full power. A quick calculation of the speed the prop tip reaches at full rotational speed encouraged me to be more careful (I would be stupid to ignore a sharp plastic blade flinging itself around at 500km/h).

Device tree files

To activate the PWM outputs, so that they can be controlled from a C++ application, I needed to get to know device tree overlays. Since the 3.8 kernel, Torvalds disallowed the use of the board-specific platform support code that was quickly flooding the kernel sources. Device trees were to be used instead.
A device tree is a flat file detailing the entire hardware: where all the peripherals are located, which drivers to load, etc…

In order to have some flexibility they are using device tree overlays that can be added on top of a base configuration.

In the case of the beaglebone black board, there is a cape manager that can load these device tree overlays at runtime, and the debian image I installed on the beaglebone comes equipped with a full set of precompiled example overlays in /lib/firmware.

In order to load the PWM pins for the quadcopter we need to:
root@beaglebone:/lib/firmware# echo am33xx_pwm > /sys/devices/bone_capemgr.9/slots
root@beaglebone:/lib/firmware# echo bone_pwm_P8_13 > /sys/devices/bone_capemgr.9/slots
root@beaglebone:/lib/firmware# echo bone_pwm_P8_19 > /sys/devices/bone_capemgr.9/slots
root@beaglebone:/lib/firmware# echo bone_pwm_P9_14 > /sys/devices/bone_capemgr.9/slots
root@beaglebone:/lib/firmware# echo bone_pwm_P9_16 > /sys/devices/bone_capemgr.9/slots

and verify with:
cat /sys/devices/bone_capemgr.9/slots
Should return:
0: 54:PF---
1: 55:PF---
2: 56:PF---
3: 57:PF---
4: ff:P-O-L Bone-LT-eMMC-2G,00A0,Texas Instrument,BB-BONE-EMMC-2G
5: ff:P-O-L Bone-Black-HDMI,00A0,Texas Instrument,BB-BONELT-HDMI
8: ff:P-O-L Override Board Name,00A0,Override Manuf,am33xx_pwm
9: ff:P-O-L Override Board Name,00A0,Override Manuf,bone_pwm_P8_13
10: ff:P-O-L Override Board Name,00A0,Override Manuf,bone_pwm_P8_19
11: ff:P-O-L Override Board Name,00A0,Override Manuf,bone_pwm_P9_14
12: ff:P-O-L Override Board Name,00A0,Override Manuf,bone_pwm_P9_16

Or do this automatically at bootup by modifying /boot/uboot/uEnv.txt.
Add this to the bootargs (in my example I edited a line in the “Example” section):
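
On these 3.8-era images the cape manager is normally fed through the capemgr.enable_partno boot option, so the line would have looked something like this (a reconstruction based on the overlay names above, not a verbatim copy of my uEnv.txt):
optargs=capemgr.enable_partno=am33xx_pwm,bone_pwm_P8_13,bone_pwm_P8_19,bone_pwm_P9_14,bone_pwm_P9_16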

Running into problems

In part 7 I explain how to control the PWM outputs from a C++ application. During this development I quickly ran into issues. I could control the duty cycle just fine, but the period was always set to 500µs, and couldn’t be changed because INVALID PARAM… It had worked just fine with the python script before.

I checked the code behind the python library, and compared it to the code in the BlackLib C++ library, but the implementation was the same. The pins were controlled by writing to a set of files in /sys/devices/ocp3/pwm_test_P8_13.14/. Trying to write 20000000 into the period file kept failing.
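
For context, this is essentially what both libraries do under the hood (the ocp and instance numbers vary per boot; the attribute names are those of the 3.8 pwm_test driver):
# cd /sys/devices/ocp3/pwm_test_P8_13.14
# echo 20000000 > period
The period in nanoseconds (20ms here); this is the write that kept failing
# echo 1500000 > duty
The pulse width in nanoseconds (1.5ms here)
# echo 1 > run
Enables the output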

Hunting around the web I quickly found other people encountering the same issue. The PWM on the beaglebone is implemented on three distinct chips. Each of those chips has 2 ehrpwm (enhanced resolution) outputs. The period can only be managed on a per-chip basis, and has to be the same for both outputs. However, the beaglebone pwm_test driver exposes both outputs separately, and asserts that any new configuration you want to apply to an output has to be the same as the other’s (chicken or egg?). Maybe it is possible to disable both outputs, change the periods, and enable them again; I haven’t looked into that in greater detail.

The prevailing solution on the web would be to patch the pwm_test driver: the period would be set to 0 in the device tree overlay, and the pwm_test driver would interpret this value and not enable the output. That would allow you to change the period after boot. In my case I wasn’t really interested in being able to change the period, I just wanted it set to 20ms. And if the device tree overlays would allow me to do that, I wouldn’t even need the patch.

I found the device tree source files here:
But I discovered later that I could also have decompiled the precompiled dtbo files in the /lib/firmware folder.
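
Decompiling such an overlay is just dtc in reverse (standard dtc usage; pick any of the overlays):
dtc -I dtb -O dts -o bone_pwm_P8_13-00A0.dts /lib/firmware/bone_pwm_P8_13-00A0.dtbo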

I went ahead and changed the period in these files from 500000 to 20000000 (20ms):
root@beaglebone:~/dts# vim bone_pwm_P9_14-00A0.dts
root@beaglebone:~/dts# vim bone_pwm_P9_16-00A0.dts
root@beaglebone:~/dts# vim bone_pwm_P8_13-00A0.dts
root@beaglebone:~/dts# vim bone_pwm_P8_19-00A0.dts

Compile like this:
dtc -O dtb -o bone_pwm_P9_14-00A0.dtbo -b 0 -@ bone_pwm_P9_14-00A0.dts
dtc -O dtb -o bone_pwm_P9_16-00A0.dtbo -b 0 -@ bone_pwm_P9_16-00A0.dts
dtc -O dtb -o bone_pwm_P8_13-00A0.dtbo -b 0 -@ bone_pwm_P8_13-00A0.dts
dtc -O dtb -o bone_pwm_P8_19-00A0.dtbo -b 0 -@ bone_pwm_P8_19-00A0.dts

Copy the dtbo files over, but make sure you back up /lib/firmware first!
cp *.dtbo /lib/firmware

Rebooting the beaglebone, and still not working… The period was still 500µs.

It took me a good while before I figured out what was actually happening. During the search I decided to download and recompile the kernel, just like the original article suggested. However, the howto on rebuilding the kernel is not correct for the kernel image in the standard debian image; instead:
git clone
cd bb-kernel
git checkout 3.8.13-bone50
sudo apt-get install device-tree-compiler lzma lzop u-boot-tools libncurses5-dev:amd64 libncurses5:i386

In the folder KERNEL/firmware/capes I could find the same dts files, and looking through the kernel code, it seemed like these device tree files are compiled and included in a binary blob inside the kernel image, instead of being read from /lib/firmware like everyone claimed.

I shared the bb-kernel folder over samba, so I could mount it on the beaglebone:
mkdir bb-kernel
mount -t cifs // ./bb-kernel -o user=nobody

I also had to change a few commands in bb-kernel/tools/
where you see:
sudo tar xf "${DIR}/deploy/${KERNEL_UTS}-dtbs.tar.gz" -C "${location}/dtbs/"
replace it with:
sudo tar xf "${DIR}/deploy/${KERNEL_UTS}-dtbs.tar.gz" --no-same-owner -C "${location}/dtbs/"

Now run ./tools/ on the beaglebone, and the new kernel with the adapted device tree files will be installed.

That did replace the modules folder, and made me lose my mt7601Usta driver for the wireless antenna. So instead of doing the complete local_install, I just copied the zImage I found in the bb-kernel/deploy folder over /boot/uboot/zImage in a clean debian installation. Reboot, and it worked!

Second thought… maybe you are supposed to copy the dts files into a file of your own, adapt as needed, and push that into /lib/firmware. So I did a few more experiments:

  • Increase the version number (from 00A0 to 00A1), and hope that the system is intelligent enough to load the one with the highest version. It wasn’t.
  • Rename the file, and use the new name. I renamed bone_pwm_P8_13-00A0.dts to panic1_pwm_P8_13-00A0.dts, compiled it, and put it in the /lib/firmware folder. It does work when you echo the new name into the slots file, but the bootloader doesn’t seem to be able to access the files in /lib/firmware, so putting the new name in the uEnv.txt file didn’t work.

Quadcopter part 5: Assembly

I finally managed to find some time to assemble the quadcopter. I haven’t got the right standoffs to mount the beaglebone black controller board onto the frame yet, but those should arrive in the mail pretty soon; for now the quadcopter is still going nowhere.


The next part will be about controlling the PWM output, reading in the sensor data and assembling everything on the prototyping cape. I made a quick drawing, but I still need to experiment and see whether these circuits will work.


Quadcopter part 3.3: Wireless (finally)

Those LogiLink wireless dongles were going nowhere fast, so I gave one to a colleague of mine; maybe he has a bit more luck with it. I decided to cut my losses and go with one of the USB wireless interfaces listed on the beaglebone site, so I ordered the UWN200 from Logic Supply. I reverted my beaglebone board to the standard debian image, because I wanted to start afresh, and the module worked straight out of the package, just like that. The interface is now called ra0 instead of wlan0, but that was not totally unexpected.

The big antenna is also an added bonus, as I expect it to give a longer range for the quadcopter. But reading up on the internet, I do expect that running this as an access point is not possible with the current Ralink driver. I will have to find a different solution for that; maybe I’ll set up a wireless AP on my laptop instead.

I measured a 100s average of 24.5Mbit/s with iperf, which is just wonderful!
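
For reference, that measurement is a plain iperf run between the beaglebone and another machine on the network; something like this, with the server address as a placeholder:
server$ iperf -s
client$ iperf -c <server-ip> -t 100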