Mer OBS VM Setup Guide

Adapted for wiki from: http://slaine.org/files/OBS_Setup.txt. Author: Glen Gray. Date: 18th Oct 2011.

= Introduction =

Mer aims to be an open, inclusive, meritocratically governed and openly developed Core optimized for HTML5/QML/JS, providing a mobile-optimised base distribution for use by device manufacturers. The Core is based upon the work from the MeeGo project and will hopefully, over time, share effort with the Tizen project.

As with MeeGo, Mer is built upon the Open Build Service infrastructure. This guide covers setting up an OBS instance that can be used to rebuild Mer and be hosted on an internal network for building Mer images (covered in another guide).

= Goals =

The aim of this guide is to turn a desktop PC into an OBS build machine for Mer. I'll detail my own setup here, but it should be usable by most experienced users to create a build environment of their own. At the end of this guide you should have a PC acting as a server to two VMs, one of which acts as a seed (the fakeobs) for the OBS build environment. The VMs will use raw LVM partitions as their storage medium, which gets us fairly close to native disk I/O performance.

= Requirements =

OBS can seem large and intimidating to get started with. However, a reasonably modern PC with VT extensions should be able to act as a sufficient build host. For my own setup, I've a Core2Duo E8400 running at 3GHz, with 4GB RAM and 2x500GB drives. The drive sizes and setup are not especially important, but it's recommended to have about 100GB available for the OBS instance.
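Before going further it's worth confirming that the CPU actually exposes VT extensions to the kernel. A minimal check using standard Linux tools (nothing Mer-specific is assumed):

```shell
# Count logical CPUs advertising hardware virtualization:
#   vmx = Intel VT-x, svm = AMD-V.
# A count of 0 means KVM cannot run accelerated guests until
# virtualization is enabled in the BIOS.
count=$(grep -Ec 'vmx|svm' /proc/cpuinfo || true)
if [ "$count" -gt 0 ]; then
    echo "VT extensions present on $count logical CPUs"
else
    echo "No VT extensions detected - check your BIOS settings"
fi
```

Note that, as the NOTE below shows, a BIOS can claim VT is enabled while the kernel still doesn't see the flags, so trust this check over the BIOS menu.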

If you're not setting up your PC from scratch and are happy to use disk images then skip the 'Buildhost VM Server' section and start reading at 'Creating your VMs'. You will need to adjust the details relating to using LVM partitions if you're not going to use those.

= Buildhost VM Server =

This setup assumes you're setting up the PC from scratch. Take this information as a guide to customizing your existing setup if you don't want to start from scratch. I chose Ubuntu Server as my VM host OS.

NOTE: I had to upgrade my BIOS; even though it reported VT as enabled, according to the kernel it wasn't.

The setup is as follows:

1) Get the 64-bit Ubuntu Server release (11.10 at time of writing) and start the installation process.

2) When formatting the drives I used the following layout:
 a) Drives detected as /dev/sda and /dev/sdb.
 b) Created a 1GB partition on both drives but formatted them differently:
     /dev/sda1 ==> EXT3 format, mounted as /boot
     /dev/sdb1 ==> swap format
 c) Create a software RAID partition filling the remainder of space on both drives.
 d) Create a RAID1 device from both soft RAID partitions.
 e) Create an LVM group on /dev/md0 (the RAID1 device); call it "buildhost".
 f) Create a logical volume of about 20GB on the LVM group.
 g) Format and mount the logical volume from the LVM group as /.
 h) Finish the installation; when given the option to choose the machine's purpose, select the OpenSSH Server and Virtual Machine Host options.
 i) After installation, reboot as directed and log in as the standard user.
     NOTE: I also installed the Ubuntu Desktop meta package to get a GUI.
 j) As the standard user, log in as root with 'sudo su -' and set root's password to one of your choosing.
 k) As root still, install the following extra packages:
     aptitude install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils
 l) As root still, go to /etc/network and open 'interfaces' for editing. Change eth0's entry from
     iface eth0 inet dhcp
    to
     iface eth0 inet manual
    Then add the following section:
     auto br0
     iface br0 inet dhcp
         bridge_ports eth0
         bridge_fd 9
         bridge_hello 2
         bridge_maxage 12
         bridge_stp off
 m) As root still, run 'apt-get dist-upgrade' and reboot once it's applied.

3) ssh into the server; the br0 device should be up with an IP from the LAN's DHCP server. Obviously if there was a problem, you'd need direct console access to diagnose.

 root@buildhost:~# ifconfig
 br0      Link encap:Ethernet  HWaddr 00:1c:c0:f2:09:b4
          inet addr:192.168.2.46  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::21c:c0ff:fef2:9b4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:268802155 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5539686 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:358314867827 (358.3 GB)  TX bytes:7841019911 (7.8 GB)

 eth0     Link encap:Ethernet  HWaddr 00:1c:c0:f2:09:b4
          inet6 addr: fe80::21c:c0ff:fef2:9b4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:278074330 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10078391 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:375951939156 (375.9 GB)  TX bytes:8158389690 (8.1 GB)
          Interrupt:27 Base address:0x4000

4) If you're using a user account on the server that wasn't created in step 2) above then you'll need to add your current user to the kvm and libvirtd groups:
    sudo adduser `id -un` kvm
    sudo adduser `id -un` libvirtd
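Group membership only takes effect on a fresh login, which is an easy thing to forget. A quick sanity check, using only standard tools, before trying virsh:

```shell
# Check that the current user is in the kvm and libvirtd groups;
# 'id -nG' lists the group names of the current user.
for grp in kvm libvirtd; do
    if id -nG | grep -qw "$grp"; then
        echo "ok: in $grp"
    else
        echo "missing: $grp (log out and back in after adduser, or repeat step 4)"
    fi
done
```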

5) Verify that the VM services are working; if you get an error instead of the output below, check step 4).

 glen@buildhost:~$ virsh -c qemu:///system list
  Id Name                 State
 ----------------------------------

6) Install the GUI tools for managing the VMs:
    sudo apt-get install virt-manager

= Creating your VMs =

For the Mer fakeobs VM I've chosen the Fedora 14 Live CD; the local OBS VM will use the OBS appliance iso (see below).

= Mer FakeOBS VM =

1) Download/scp the Fedora 14 Live CD iso over to the buildhost.

2) If logging in remotely over ssh, use the -Y ssh option to allow X forwarding to your local box.

3) Create an LVM logical volume for the VM. I use a sane name for the volume, plus the group name chosen when the LVM group was created; in step 2e) above I chose buildhost.
    sudo lvcreate --name mer_vm --size 40G buildhost

4) Run 'virt-manager'.

5) In the Virtual Machine Manager window, double click on the 'localhost (QEMU) - Not Connected' entry to connect.

6) In the toolbar, press the first icon (tooltip says 'Create a new virtual machine'). On the wizard screens, do the following:
 a) Name the VM accordingly and choose 'Local install media (ISO...'.
 b) On the next screen, choose 'Use ISO image', click the browse button, click 'Browse Local' on the bottom left of the next screen, navigate to where your Fedora Live iso is located and double click on the iso filename. Set the OS type to Linux and the Version as appropriate, then click Forward.
 c) On the CPU and memory resources screen, set what you feel is appropriate. For my setup as detailed at the top of this document, I selected 1024 MB RAM and 2 CPUs. Click Forward.
 d) On the storage screen, click the 'Select managed or other existing storage' option and enter '/dev/buildhost/mer_vm'. The path template is /dev/<volume group>/<volume name>.
 e) On the final summary screen, drop down the 'Advanced options' and make sure that the Host device drop down list is pointing to the br0 device: 'Host device vnet0 (Bridge 'br0')'. Also make sure that 'Virt Type' is set to 'kvm' and that 'Architecture' is i686. Click Finish. This will boot the VM from the Live ISO as normal.
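As an aside, the same VM can be defined without the GUI using virt-install. A hedged sketch only, not part of the guide's tested procedure: the LVM volume and bridge names come from the earlier steps, the ISO path is a placeholder of my own, and exact flags vary between virt-install versions:

```shell
# Sketch: CLI equivalent of the virt-manager wizard above.
# Assumes /dev/buildhost/mer_vm and bridge br0 from earlier steps;
# the ISO path is a placeholder - adjust to wherever you saved it.
virt-install \
    --connect qemu:///system \
    --name mer_vm \
    --ram 1024 \
    --vcpus 2 \
    --arch i686 \
    --os-type linux \
    --cdrom /var/lib/libvirt/images/Fedora-14-i686-Live.iso \
    --disk path=/dev/buildhost/mer_vm \
    --network bridge=br0
```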

= Setting up the Fakeobs VM =

1) Once the login screen is present, log in as the live user.

2) Double click on the 'Install to Hard Drive' desktop icon.

3) Follow the wizard:
 a) Intro screen: click Next.
 b) Keyboard: choose as appropriate, click Next.
 c) Device type screen: choose 'Basic Storage Devices' and click Next.
 d) When presented with block devices to use, choose 'Virtio Block Device'. This should be the 40GB LVM volume we created in step 3) under "Mer FakeOBS VM" above. Click the check box beside the device and then click Next.
 e) If this is the first install to that volume, you'll see a warning dialog because the device is blank. Select the 'Re-Initialize' button.
 f) On the next wizard screen, choose the hostname (I chose mervm. ), click Next.
 g) Select your nearest city from the map or the drop down list and click Next ('System clock uses UTC' should be checked by default).
 h) Enter your root password when prompted and press Next (pressing return after each password field will automatically advance to the next screen).
 i) Select the 'Use All Space' option for the drive layout. If you've got specific layout requirements, check the 'Review and modify partitioning layout' box and click Next. I used the installer's defaults.
 j) A warning dialog is presented about writing to disk; click 'Write changes to disk'. The partitions will be created and formatted at this point and then the OS will be installed.
 k) Assuming all went well, you'll get a 'Congratulations....' message. Close the dialog, go to the live desktop's 'System->Shutdown...' and press 'Restart' in the presented dialog.
 l) On first boot, you'll be greeted with the firstrun dialog:
    - Click Forward on the welcome screen.
    - Click Forward on the License screen.
    - Create a user with the usual details on the 'Create User' screen.
    - On Date and Time, select the sync-with-the-internet option and click Next.
    - On the hardware profile screen, select 'Send Profile' and click Finish.

4) Log in as the user on the GDM screen.

5) Open a terminal, log in as root and set up your user with sudo rights. For a buildhost, I always choose the NOPASSWD variant.
    su -
    visudo
   Then log out as root and close the terminal window.

6) I typically log in via SSH; if you do the same, you can make some tweaks. Open a new terminal window (the sudo changes will apply in this one):
    sudo chkconfig sshd on
    sudo chkconfig iptables off
   Also run 'sudo vi /etc/inittab' and change the default runlevel from 5 (X Windows) to 3; this will use up fewer resources when your VM is up.

7) Install some build tools and update your OS:
    sudo yum groupinstall -y "Development Tools" "Development Libraries"
    sudo yum upgrade -y

8) With the updates installed, reboot the machine and log in.

9) Install python-pip so that we can install some PyPI packages:
    sudo yum install python-pip

10) Now install some PyPI packages:
    sudo pip-python install GitPython
   (This should pull down GitPython 0.3.2.rc1, as well as the deps gitdb, async and smmap.)

11) Create a directory to store the Mer files. I created:
    mkdir -p Projects/mer; cd Projects/mer

12) Clone the release tools:
    git clone http://monster.tspre.org:8080/p/mer/release-tools

13) Build the release tools:
    cd release-tools; make
   Note: you might need to set an rsync proxy:
    export RSYNC_PROXY=proxy.company.com:8080
   This will rsync all the package git repos, clone the Core git and check it out, and make a mappings cache of git repos (it will md5sum all sources). It will take a while.

14) Fetch the OBS repos. This step will take a while depending on your network connection:
    cd obs-repos; rsync -aHx --progress rsync://monster.tspre.org/mer-releases/obs-repos/* .

15) Start the fakeobs server:
    cd ..; python tools/fakeobs.py 8001 &

16) From another machine you can verify that your fakeobs is up by doing:
    curl http://mervm.labs.lincor.com:8001/public/source/Core:i586/acl
   For example:
    glen@obsvm:~> curl http://mervm. :8001/public/source/Core:i586/acl

Your Mer fake OBS is now up and running.

NOTE: I found I was having OOM issues with both VMs running. Since all this VM is doing at this point is running fakeobs, I dropped its allocated RAM to just 128MB and turned off services it doesn't need:
 [root@mervm ~]# chkconfig avahi-daemon off
 [root@mervm ~]# chkconfig bluetooth off
 [root@mervm ~]# chkconfig cups off
 [root@mervm ~]# chkconfig gpm off
 [root@mervm ~]# chkconfig ip6tables off
 [root@mervm ~]# chkconfig iscsi off
 [root@mervm ~]# chkconfig iscsid off
 [root@mervm ~]# chkconfig pcscd off
 [root@mervm ~]# chkconfig sendmail off
 [root@mervm ~]# chkconfig smolt off

= Local OBS VM =

1) Download the latest OBS appliance iso to the buildhost (from http://download.opensuse.org/repositories/openSUSE:/Tools/images/iso/).

2) If logging into the buildhost remotely over ssh, use the -Y ssh option to allow X forwarding to your local box for the virt-manager app.

3) Create an LVM logical volume for the VM. I use a sane name for the volume, plus the group name chosen when the LVM group was created; above I chose 'buildhost'.
    sudo lvcreate --name obs_vm --size 100G buildhost

4) Run 'virt-manager'

5) If it doesn't automatically connect, in the Virtual Machine Manager window, double click on the 'localhost (QEMU) - Not Connected' entry to connect

6) In the toolbar, press the first icon (tooltip says 'Create a new virtual machine'). On the wizard screens, do the following:
 a) Name the VM accordingly and choose 'Local install media (ISO...'.
 b) On the next screen, choose 'Use ISO image', click the browse button, click 'Browse Local' on the bottom left of the next screen, navigate to where your obs-server-install iso is located and double click on the iso filename. Set the OS type to Linux and the Version as appropriate, then click Forward.
 c) On the CPU and memory resources screen, set what you feel is appropriate. For my setup as detailed at the top of this document, I selected 2048 MB RAM and 2 CPUs. Click Forward.
 d) On the storage screen, click the 'Select managed or other existing storage' option and enter '/dev/buildhost/obs_vm'. The path template is /dev/<volume group>/<volume name>.
 e) On the final summary screen, drop down the 'Advanced options' and make sure that the Host device drop down list is pointing to the br0 device: 'Host device vnet0 (Bridge 'br0')'. Also make sure that 'Virt Type' is set to 'kvm' and that 'Architecture' is i686. Click Finish. This will boot the VM from the install ISO as normal.

= Setting Up the OBS Install =

The OBS installation is pretty self-contained; just follow the onscreen prompts.

a) For some reason, the partitioning script barfs when it's trying to resize the filesystem. I had to boot into a live CD iso and run gparted to resize the disk to the full 100G I'd allocated to the VM. It appears to be a heads/cylinders detection issue between the parted script and /dev/vda.

b) ssh won't start by default. When the VM boots, log in as root at the console. You have to change /var/lib/empty to be owned by root; then you can enable ssh by default:
    chown root:root /var/lib/empty
    chkconfig sshd on
    service sshd start
   Also edit /etc/sysconfig/obs-server and change OBS_SCHEDULER_ARCHITECTURES from
    OBS_SCHEDULER_ARCHITECTURES="i586 x86_64"
   to
    OBS_SCHEDULER_ARCHITECTURES="i586 x86_64 armv7el armv8el"

c) Add a new user for yourself. I used the same name here as I did for the OBS webUI, but it makes no difference; the webUI user is virtual.
    useradd <username>
    passwd <username>
    mkhomedir_helper <username>

d) Update the installation:
    zypper refresh
    zypper dist-upgrade
    reboot
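The scheduler change in step b) lends itself to scripting. A minimal sketch, written as a stdin-to-stdout filter (the function name is my own) so the result can be previewed before applying it with sed -i:

```shell
# Rewrite the OBS_SCHEDULER_ARCHITECTURES line to add the ARM schedulers,
# as described in step b) above.  Reads stdin, writes stdout.
enable_arm_schedulers() {
    sed 's/^OBS_SCHEDULER_ARCHITECTURES=.*/OBS_SCHEDULER_ARCHITECTURES="i586 x86_64 armv7el armv8el"/'
}

# Preview: enable_arm_schedulers < /etc/sysconfig/obs-server
# Apply:   run the same sed expression with -i against /etc/sysconfig/obs-server
```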

1) Open a browser and go to the IP/hostname of the OBS VM on port 81 (http://obsvm. :81). The default username is Admin and the password is opensuse.

2) Select "User Management", then "New user" to create a new admin-level user for yourself (make sure you select "confirmed" as the user state).

3) Redirect the browser to the default port 80 of your OBS VM: http://obsvm.

4) Login as your newly created user

5) Click the "Setup OBS" button. This will present a pre-filled form that creates a remote openSUSE Build Service instance. We need to change the details here so that it points to your local fakeobs as created above:
    Local Project Name: fakeobs
    Remote OBS api url: http://mervm. :8001/public
    Title: fakeobs
    Description: Mer seed
   Then click the save changes button.

6) ssh onto the OBS VM from another machine.

7) Run:
    osc config
   This will ask for a username and password for connecting to api.opensuse.org, a remote server. Fill in the username and password for your OBS instance as created in step 2) above. Then edit .oscrc and change line 4
    apiurl = https://api.opensuse.org
   to
    apiurl = http://obsvm. :81
   and change line 104 from "" to ":81".

8) To verify that your console osc environment is correct, you can list the projects with the command 'osc ls'.

 glen@obsvm:~> osc ls
 WARNING: SSL certificate checks disabled. Connection is insecure!
 fakeobs
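The two .oscrc edits from step 7) can also be scripted. A hedged sketch, again as a stdin-to-stdout filter; 'obsvm.example.com' is a placeholder hostname of my own, so substitute your OBS VM's real name before use:

```shell
# Point an .oscrc at the local OBS API instead of api.opensuse.org:
# rewrites both the apiurl line and the matching credentials section
# header.  obsvm.example.com is a placeholder hostname.
repoint_oscrc() {
    sed -e 's#^apiurl = https://api\.opensuse\.org$#apiurl = http://obsvm.example.com:81#' \
        -e 's#^\[https://api\.opensuse\.org\]$#[http://obsvm.example.com:81]#'
}

# Preview: repoint_oscrc < ~/.oscrc
```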

9) Test checking out some code:
    osc checkout fakeobs:Core:i586 acl
   If it succeeds, you should see:

 glen@obsvm:~> osc checkout fakeobs:Core:i586 acl
 WARNING: SSL certificate checks disabled. Connection is insecure!
 A    fakeobs:Core:i586
 A    fakeobs:Core:i586/acl
 A    fakeobs:Core:i586/acl/acl-2.2.49-build.patch
 A    fakeobs:Core:i586/acl/acl-2.2.49-multilib.patch
 A    fakeobs:Core:i586/acl/acl-2.2.51.src.tar.gz
 A    fakeobs:Core:i586/acl/acl.changes
 A    fakeobs:Core:i586/acl/acl.spec
 At revision 2.

10) Next, we'll create a local project for your user that incorporates the i586 packages from the fakeobs VM, which is acting as a seed server. Again, we'll use the i586 arch for this step, but other archs are hosted too.
    osc meta prj home:USERNAME:i586 -e

The XML should look along these lines (make sure you use your own "USERNAME" value), paying special attention to the repository and arch tags:

 <project name="home:USERNAME:i586">
   <title>USERNAME's i586 Project</title>
   <description>i586 optimized build of Mer</description>
   <repository name="Core_i586">
     <path repository="Core_i586" project="fakeobs:Core:i586"/>
     <arch>i586</arch>
   </repository>
 </project>

This can also be edited in the web interface, under your project -> Advanced -> Raw config -> Edit

11) Verify that we can import packages from fakeobs into home:USERNAME:i586:
    osc copypac fakeobs:Core:i586 acl home:USERNAME:i586

 glen@obsvm:~> osc copypac fakeobs:Core:i586 acl home:glen:i586
 WARNING: SSL certificate checks disabled. Connection is insecure!
 Sending meta data...
 Copying files...
 9492a3e07c2a672b8b86932dbcf54aa5 2.2.51 1318937695 glen osc copypac from project:fakeobs:Core:i586 package:acl revision:2

You can then point your browser to http://obsvm. /monitor to see the usage graph as it compiles the acl package.

12) For ARM, repeat steps 8) to 11), replacing i586 with armv7hl. For step 10), use the following project structure. Note that the arch field is set to armv8el; this is an OBS tag representing armv7 with hardfp, and it should be set up as per step b) at the beginning of this section.

 <project name="home:USERNAME:armv7hl">
   <title>USERNAME's ARMv7 Hardfp Project</title>
   <description>ARMv7hl optimized build of Mer</description>
   <repository name="Core_armv7hl">
     <path repository="Core_armv7hl" project="fakeobs:Core:armv7hl"/>
     <arch>armv8el</arch>
   </repository>
 </project>

13) Finally, we'll import all the packages into the OBS.    You should be able to repeat the step below for other architectures,     replacing i586 with armv7hl for example.

osc ls fakeobs:Core:i586 | xargs -L1 -Ixxx osc linkpac -C copy fakeobs:Core:i586 xxx home:USERNAME:i586

This will give your OBS VM something to chew on as it sets up local builds of all of the i586 packages. This will take some time. You can keep an eye on things by pointing your browser to http://obsvm.labs.yourcompany.com/monitor. For details about the packages themselves, go to http://obsvm.labs.yourcompany.com/home/list_my and click on the monitor icon beside the home:USERNAME project.
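The xargs idiom in step 13) expands each package name into its own linkpac invocation. A dry run (substituting echo for the real osc call, with 'acl' and 'attr' as sample package names) shows exactly what would be executed:

```shell
# Dry run of the mass-import loop: print, rather than execute, one
# 'osc linkpac' command per package name read on stdin.
# -Ixxx substitutes each input line for 'xxx', one command per line.
printf 'acl\nattr\n' |
    xargs -Ixxx echo osc linkpac -C copy fakeobs:Core:i586 xxx home:USERNAME:i586
```

Once the printed commands look right, drop the echo (or pipe 'osc ls fakeobs:Core:i586' in instead of printf, as in step 13) to run the import for real.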