This repository contains the definitions for all of the Baserock Project's infrastructure. This includes every service used by the project, except for the mailing lists (hosted by Pepperfish) and the wiki (hosted by Branchable).
Some of these systems are Baserock systems; others are Ubuntu- or Fedora-based. Eventually we want to move all of these to being Baserock systems.
The infrastructure is set up in a way that parallels the preferred Baserock approach to deployment. All files necessary for (re)deploying the systems should be contained in this Git repository, with the exception of certain private tokens (which should be simple to inject at deploy time).
When instantiating a machine that will be public, remember to give shell access to everyone on the ops team. This can be done using a post-creation customisation script that injects all of their SSH keys. The SSH public keys of the Baserock Operations team are collected in baserock-ops-team.cloud-config.
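For example, when booting an instance with the nova CLI, the cloud-config file can be passed as user-data. This is a sketch: the instance name and flavor are illustrative, and $keyname, $fedora_image_id and $network_id are the placeholders defined later in this document.

nova boot my-public-instance \
    --key-name $keyname \
    --flavor dc1.1x1 \
    --image $fedora_image_id \
    --nic "net-id=$network_id" \
    --user-data ./baserock-ops-team.cloud-config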
Ensure SSH password login is disabled in all systems you deploy! See https://testbit.eu/is-ssh-insecure/ for why. The Ansible playbook admin/sshd_config.yaml can ensure that all systems have password login disabled.
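To apply it across every machine in the inventory:

ansible-playbook -i hosts admin/sshd_config.yaml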
You can use Ansible to automate tasks on the baserock.org systems.
To run a playbook:
ansible-playbook -i hosts $PLAYBOOK.yaml
To run an ad-hoc command (upgrading, for example):
ansible -i hosts fedora -m command -a 'sudo yum update -y'
ansible -i hosts ubuntu -m command -a 'sudo apt-get update -y'
Fedora security updates can be watched at https://bodhi.fedoraproject.org/updates/?type=security. Ubuntu publishes security advisories at http://www.ubuntu.com/usn/. The Baserock reference systems don't have such a service. The LWN Alerts service sends you security advisories from all major Linux distributions.
If there is a vulnerability discovered in some software we use, we might need to upgrade all of the systems that use that component at baserock.org.
Bear in mind some systems are not accessible except via the frontend-haproxy system. Those are usually less at risk than those that face the web directly. Also bear in mind we use OpenStack security groups to block most ports.
First, you need to update the Baserock reference system definitions with a fixed version of the component. Build that and test that it works. Submit the patch to gerrit.baserock.org, get it reviewed, and merged. Then cherry pick that patch into infrastructure.git.
This is a long-winded process. There are shortcuts you can take, although someone still has to complete the process described above at some point.
- You can modify the infrastructure.git definitions directly and start rebuilding the infrastructure systems right away, to avoid waiting for the Baserock patch review process.
- You can add the new version of the component as a stratum that sits above everything else in the build graph. For example, to do a 'hot-fix' for GLIBC, add a 'glibc-hotfix' stratum containing the new version to all of the systems you need to upgrade. Rebuilding them will be quick because you just need to build GLIBC, and can reuse the cached artifacts for everything else. The new GLIBC will overwrite the one that is lower down in the build graph in the resulting filesystem. Of course, if the new version of the component is not ABI-compatible then this approach will break things. Be careful.
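A rough sketch of such a stratum (the exact stratum format depends on the definitions version in use, and the repo, ref and chunk morph below are placeholders, not real values):

cat > strata/glibc-hotfix.morph <<'EOF'
name: glibc-hotfix
kind: stratum
build-depends:
- morph: strata/build-essential.morph
chunks:
- name: glibc
  morph: strata/glibc-hotfix/glibc.morph
  repo: upstream:glibc
  ref: <sha1-of-the-fixed-version>
  unpetrify-ref: baserock/glibc-hotfix
  build-depends: []
EOF

Then add strata/glibc-hotfix.morph to the end of the 'strata' list of each system you need to upgrade.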
Make sure the Ansible inventory file is up to date, and that you have access to all machines. Run this:
ansible \* -i ./hosts -m ping
You should see lots of this sort of output:
mail | success >> {
"changed": false,
"ping": "pong"
}
frontend-haproxy | success >> {
"changed": false,
"ping": "pong"
}
You may find some host key errors like this:
paste | FAILED => SSH Error: Host key verification failed.
It is sometimes useful to re-run the command using -vvvv, which prints SSH debug output to help diagnose the issue.
If you have a host key problem, that could be because somebody redeployed the system since the last time you connected to it with SSH, and did not transfer the SSH host keys from the old system to the new system. Check with other ops team members about this. If you are sure the new host keys can be trusted, you can remove the old ones with 'ssh-keygen -R 192.168.x.y', where 192.168.x.y is the internal IP address of the machine. You'll then be prompted to accept the new ones when you run Ansible again.
Once all machines respond to the Ansible 'ping' module, double check that every machine you can see in the OpenStack Horizon dashboard has a corresponding entry in the 'hosts' file, to ensure the next steps operate on all of the machines.
Bear in mind that only the latest 2 versions of Fedora receive security updates. If any machines are not running the latest version of Fedora, you should redeploy them with the latest version. See the instructions below on how to (re)deploy each machine. You should deploy a new instance of a system and test it before terminating the existing instance. Switching over should be a matter of changing either its floating IP address or the IP address in baserock_frontend/haproxy.cfg.
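Moving a floating IP with the classic novaclient CLI looks like this (a sketch; the instance names are illustrative):

nova remove-floating-ip old-instance 185.43.218.170
nova add-floating-ip new-instance 185.43.218.170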
You can find out what version of Fedora is in use with this command:
ansible fedora -i hosts -m setup -a 'filter=ansible_distribution_version'
Check what version of a package is in use with this command (using GLIBC as an example). You can compare this against Fedora package changelogs at Koji.
ansible fedora -i hosts -m command -a 'rpm -q glibc --qf "%{VERSION}.%{RELEASE}\n"'
You can see what updates are available using the 'dnf updateinfo info' command.
ansible -i hosts fedora -m command -a 'dnf updateinfo info glibc'
You can then use 'dnf upgrade -y' to install all available updates, or give the name of a package to update just that package. Be aware that DNF is quite slow, and if you forget to pass -y then it will hang forever waiting for input.

You will then need to restart services. The 'dnf needs-restarting' command might be useful, but rebooting the whole machine is probably easiest.
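For example, to upgrade just GLIBC on the Fedora machines and then list what needs restarting (a sketch; 'needs-restarting' comes from the dnf-plugins-core package, which may need installing first):

ansible -i hosts fedora -m command -a 'dnf upgrade -y glibc' --sudo
ansible -i hosts fedora -m command -a 'dnf needs-restarting' --sudo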
Bear in mind that only the latest and the latest LTS release of Ubuntu receive any security updates.
Find out what version of Ubuntu is in use with this command:
ansible ubuntu -i hosts -m setup -a 'filter=ansible_distribution_version'
Check what version of a given package is in use with this command (using GLIBC as an example).
ansible -i hosts ubuntu -m command -a 'dpkg-query --show libc6'
Check for available updates, and what they contain:
ansible -i hosts ubuntu -m command -a 'apt-cache policy libc6'
ansible -i hosts ubuntu -m command -a 'apt-get changelog libc6' | head -n 20
You can update all the packages with:
ansible -i hosts ubuntu -m command -a 'apt-get upgrade -y' --sudo
You will then need to restart services. Rebooting the machine is probably easiest.
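If a full reboot is inconvenient, the 'checkrestart' tool can list processes that are still using old libraries (an assumption: checkrestart comes from the debian-goodies package, which is not installed by default):

ansible -i hosts ubuntu -m command -a 'checkrestart' --sudo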
Check what version of a given package is in use with this command (using GLIBC as an example). Ideally Baserock reference systems would have a query tool for this info, but for now we have to look at the JSON metadata file directly.
ansible -i hosts baserock -m command \
-a "grep '\"\(sha1\|repo\|original_ref\)\":' /baserock/glibc-bins.meta"
The default Baserock machine layout uses Btrfs for the root filesystem. Filling up a Btrfs disk results in unpredictable behaviour. Before deploying any system upgrades, check that each machine has enough free disk space to hold an upgrade. Allow for at least 4GB free space, to be safe.
ansible -i hosts baserock -m command -a "df -h /"
A good way to free up space is to remove old system-versions using the system-version-manager tool. There may be other things that are unnecessarily taking up space in the root file system, too.
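For example, on the machine itself (a sketch; check the output of 'list' carefully before removing anything, and never remove the running version):

system-version-manager list
system-version-manager remove <old-version-label>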
Ideally, at this point you've prepared a patch for definitions.git to fix the security issue in the Baserock reference systems, and it has been merged. In that case, pull from the reference systems into infrastructure.git:

git pull git://git.baserock.org/baserock/baserock/definitions master
If the necessary patch isn't merged in definitions.git, it's still best to merge 'master' from there into infrastructure.git, and then cherry-pick the patch from Gerrit on top.
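A cherry-pick from Gerrit might look like this (a sketch; the refs/changes path is a placeholder, the project path is an assumption, and the exact fetch command for a given change is shown on its Gerrit page):

git fetch https://gerrit.baserock.org/baserock/baserock/definitions refs/changes/NN/NNNN/N
git cherry-pick FETCH_HEAD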
You then need to build and upgrade the systems one by one. Do this from the 'devel-system' machine in the same OpenStack cloud that hosts the infrastructure. Baserock upgrades currently involve transferring the whole multi-gigabyte system image, so you must have a fast connection to the target.
Each Baserock system has its own deployment instructions. Each should have a deployment .morph file that you can pass to 'morph upgrade'. For example, to deploy an upgrade to git.baserock.org:
morph upgrade --local-changes=ignore \
baserock_trove/baserock_trove.morph gbo.VERSION_LABEL=2016-02-19
Once this completes successfully, rebooting the system should bring up the new system. You may want to check that the new /etc is correct; you can do this inside the machine by mounting /dev/vda and looking in systems/$VERSION_LABEL/run/etc.
If you want to revert the upgrade, use 'system-version-manager list' and 'system-version-manager set-default <old-version>' to set the previous version as the default, then reboot. If the system doesn't boot at all, reboot it while you have the graphical console open in Horizon, and you should be able to press ESC fast enough to get the boot menu open. This will allow booting into previous versions of the system. (You shouldn't have any problems, though, since of course we test everything regularly.)
Beware of https://storyboard.baserock.org/#!/story/77.
For cache.baserock.org, you can reuse the deployment instructions for git.baserock.org. Try:
morph upgrade --local-changes=ignore \
baserock_trove/baserock_trove.morph \
gbo.update-location=root@cache.baserock.org \
gbo.VERSION_LABEL=2016-02-19
The intention is that all of the systems defined here are deployed to an OpenStack cloud. The instructions here hardcode some details about the specific tenancy at DataCentred that the Baserock project uses. It should be easy to adapt them for other OpenStack hosts, though.
The instructions below assume you have the following environment variables set according to the OpenStack host you are deploying to:
OS_AUTH_URL
OS_TENANT_NAME
OS_USERNAME
OS_PASSWORD
When using 'morph deploy' to deploy to OpenStack, you will need to set the following variables, because currently Morph does not honour the standard ones (see https://storyboard.baserock.org/#!/story/35):

export OPENSTACK_USER=$OS_USERNAME
export OPENSTACK_PASSWORD=$OS_PASSWORD
export OPENSTACK_TENANT=$OS_TENANT_NAME
The 'location' field in the deployment .morph file will also need to point to the correct $OS_AUTH_URL.
The instructions assume the presence of a set of security groups. You can create these by running the following Ansible playbook. You'll need the OpenStack Ansible modules cloned from https://github.com/openstack-ansible/openstack-ansible-modules/.
ANSIBLE_LIBRARY=../openstack-ansible-modules ansible-playbook -i hosts \
firewall.yaml
The commands below use a couple of placeholders like $network_id; you can set them in your environment so that you can copy and paste the commands as-is.
export fedora_image_id=...  (find this with 'glance image-list')
export network_id=...       (find this with 'neutron net-list')
export keyname=...          (find this with 'nova keypair-list')
The $fedora_image_id should reference a Fedora Cloud image. You can import these from http://www.fedoraproject.org/. At the time of writing, these instructions were tested with Fedora Cloud 23 for x86_64.
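Importing a downloaded image with the classic glance CLI looks like this (a sketch; the image file name is illustrative):

glance image-create --name 'Fedora Cloud 23' \
    --disk-format qcow2 --container-format bare \
    --file Fedora-Cloud-Base-23.x86_64.qcow2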
Backups of git.baserock.org's data volume are run by, and stored on, a Codethink-managed machine named 'access'. They will need to migrate off this system before long. The backups are taken without pausing services or snapshotting the data, so they will not be 100% clean. The current git.baserock.org data volume does not use LVM, so it cannot be easily snapshotted.
Backups of 'gerrit' and 'database' are handled by the 'baserock_backup/backup.py' script. This currently runs on an instance in Codethink's internal OpenStack cloud.
Instances themselves are not backed up. In the event of a crisis we will redeploy them from the infrastructure.git repository. There should be nothing valuable stored outside of the data volumes that are backed up.
To prepare the infrastructure to run the backup scripts you will need to run the following playbooks:
ansible-playbook -i hosts baserock_frontend/instance-backup-config.yml
ansible-playbook -i hosts baserock_database/instance-backup-config.yml
ansible-playbook -i hosts baserock_gerrit/instance-backup-config.yml
NOTE: to run these playbooks you need to have the public SSH key of the backups instance in keys/backup.key.pub.
The front-end provides a reverse proxy, to allow more flexible routing than simply pointing each subdomain to a different instance using separate public IPs. It also provides a starting point for future load-balancing and failover configuration.
To deploy this system:
nova boot frontend-haproxy \
--key-name=$keyname \
--flavor=dc1.1x0 \
--image=$fedora_image_id \
--nic="net-id=$network_id" \
--security-groups default,gerrit,web-server \
--user-data ./baserock-ops-team.cloud-config
ansible-playbook -i hosts baserock_frontend/image-config.yml
ansible-playbook -i hosts baserock_frontend/instance-config.yml
ansible-playbook -i hosts baserock_frontend/instance-backup-config.yml
ansible -i hosts -m service -a 'name=haproxy enabled=true state=started' \
--sudo frontend-haproxy
The baserock_frontend system is stateless.
Full HAProxy 1.5 documentation: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html.
If you want to add a new service to the Baserock Project infrastructure via the frontend, do the following:
- request a subdomain that points at 185.43.218.170 (frontend)
- alter the haproxy.cfg file in the baserock_frontend/ directory in this repo as necessary to proxy requests to the real instance
- run the baserock_frontend/instance-config.yml playbook
- run: ansible -i hosts -m service -a 'name=haproxy enabled=true state=restarted' --sudo frontend-haproxy
OpenStack doesn't provide any kind of internal DNS service, so you must use the fixed internal IP address of each instance.
The internal IP address of this machine is hardcoded in some places (beyond the usual haproxy.cfg file), use 'git grep' to find all of them. You'll need to update all the relevant config files. We really need some internal DNS system to avoid this hassle.
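For example, to find every file that hardcodes an internal 192.168.222.x address:

git grep -n '192\.168\.222\.'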
Baserock infrastructure uses a shared MariaDB database. MariaDB was chosen because Storyboard only supports MariaDB.
To deploy this system to production:
nova boot database-mariadb \
--key-name=$keyname \
--flavor dc1.1x1 \
--image=$fedora_image_id \
--nic="net-id=$network_id,v4-fixed-ip=192.168.222.146" \
--security-groups default,database-mysql \
--user-data ./baserock-ops-team.cloud-config
nova volume-create \
--display-name database-volume \
--display-description 'Database volume' \
--volume-type Ceph \
100
nova volume-attach database-mariadb <volume ID> /dev/vdb
ansible-playbook -i hosts baserock_database/image-config.yml
ansible-playbook -i hosts baserock_database/instance-config.yml
ansible-playbook -i hosts baserock_database/instance-backup-config.yml
At this point, if you are restoring from a backup, rsync the data from your backup server onto the instance, then start the mariadb service, and you are done.
sudo --preserve-env -- rsync --archive --chown mysql:mysql --hard-links \
--info=progress2 --partial --sparse \
root@backupserver:/srv/backup/database/* /var/lib/mysql
sudo systemctl enable mariadb.service
sudo systemctl start mariadb.service
NOTE: If you see the following message in the journal:
The datadir located at /var/lib/mysql needs to be upgraded using 'mysql_upgrade' tool. This can be done using the following steps
This is because the backup you are importing is from an older version of MariaDB. To fix this, as the message says, you only need to run:
sudo -u mysql mysql_upgrade -u root -p
If you are starting from scratch, you need to prepare the system by adding the required users and databases. Run the following playbook, which can be altered and rerun whenever you need to add more users or databases, or you want to check the database configuration matches what you expect.
ansible -i hosts database-mariadb -m service -a 'name=mariadb enabled=true state=started' --sudo
ansible-playbook -i hosts baserock_database/instance-mariadb-config.yml
The internal IP address of this machine is hardcoded in some places (beyond the usual haproxy.cfg file), use 'git grep' to find all of them. You'll need to update all the relevant config files. We really need some internal DNS system to avoid this hassle.
The mail relay is currently a Fedora Cloud 23 image running Exim.
It is configured to only listen on its internal IP. It's not intended to receive mail, or relay mail sent by systems outside the baserock.org cloud.
To deploy it:
nova boot mail \
--key-name $keyname \
--flavor dc1.1x0 \
--image $fedora_image_id \
--nic "net-id=$network_id,v4-fixed-ip=192.168.222.145" \
--security-groups default,internal-mail-relay \
--user-data ./baserock-ops-team.cloud-config
ansible-playbook -i hosts baserock_mail/image-config.yml
ansible-playbook -i hosts baserock_mail/instance-config.yml
The mail relay machine is stateless.
The internal IP address of this machine is hardcoded in some places (beyond the usual haproxy.cfg file), use 'git grep' to find all of them. You'll need to update all the relevant config files. We really need some internal DNS system to avoid this hassle.
To deploy this system to production:
vim baserock_openid_provider/baserock_openid_provider/settings.py
Check the DATABASE_HOST IP, and check the other settings against the Django deployment checklist.
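Recent Django versions can automate part of that checklist; if the Django version in use supports it, this is a useful sanity check to run in the application directory:

python manage.py check --deploy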
nova boot openid.baserock.org \
--key-name $keyname \
--flavor dc1.1x1 \
--image $fedora_image_id \
--nic "net-id=$network_id,v4-fixed-ip=192.168.222.144" \
--security-groups default,web-server \
--user-data ./baserock-ops-team.cloud-config
ansible-playbook -i hosts baserock_openid_provider/image-config.yml
ansible-playbook -i hosts baserock_openid_provider/instance-config.yml
The baserock_openid_provider system is stateless.
To change Cherokee configuration, it's usually easiest to use the cherokee-admin tool in a running instance. SSH in as normal, but forward port 9090 to localhost (pass -L9090:localhost:9090 to SSH). Back up the old /etc/cherokee/cherokee.conf file, then run cherokee-admin, and log in using the credentials it gives you. After changing the configuration, please update the cherokee.conf in infrastructure.git to match the changes cherokee-admin made.
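For example (an assumption: Fedora Cloud images create a 'fedora' login user):

ssh -L9090:localhost:9090 fedora@openid.baserock.org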
To deploy to production, run these commands in a Baserock 'devel' or 'build' system.
nova volume-create \
--display-name gerrit-volume \
--display-description 'Gerrit volume' \
--volume-type Ceph \
100
git clone git://git.baserock.org/baserock/baserock/infrastructure.git
cd infrastructure
morph build systems/gerrit-system-x86_64.morph
morph deploy baserock_gerrit/baserock_gerrit.morph
nova boot gerrit.baserock.org \
--key-name $keyname \
--flavor 'dc1.2x4.40' \
--image baserock_gerrit \
--nic "net-id=$network_id,v4-fixed-ip=192.168.222.69" \
--security-groups default,gerrit,git-server,web-server \
--user-data baserock-ops-team.cloud-config
nova volume-attach gerrit.baserock.org <volume-id> /dev/vdb
Accept the license and download the latest Java Runtime Environment from http://www.oracle.com/technetwork/java/javase/downloads/server-jre8-downloads-2133154.html
Accept the license and download the latest Java Cryptography Extensions from http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
Save these two files in the baserock_gerrit/ folder. The instance-config.yml Ansible playbook will upload them to the new system.
# Don't copy-paste this! Use the Oracle website instead!
wget --no-cookies --no-check-certificate \
--header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \
"http://download.oracle.com/otn-pub/java/jdk/8u40-b25/server-jre-8u40-linux-x64.tar.gz"
wget --no-cookies --no-check-certificate \
--header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" \
"http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip"
ansible-playbook -i hosts baserock_gerrit/instance-config.yml
For baserock.org Gerrit you will also need to run:
ansible-playbook -i hosts baserock_gerrit/instance-ca-certificate-config.yml
If you are restoring from a backup, rsync the data across from your backup server on the instance, then start the gerrit service.
systemctl stop gerrit.service
rm -r /srv/gerrit/*
rsync --archive --chown gerrit:gerrit --hard-links \
--info=progress2 --partial --sparse \
root@backupserver:/srv/backup/gerrit/* /srv/gerrit/
systemctl start gerrit.service
NOTE: If you are restoring a backup from an older version of Gerrit, you might need to run some of the following commands to migrate the schemas of the database, and also the Gerrit data (this was needed to move from 2.9.4 to 2.11.4):
java -jar /opt/gerrit/gerrit-2.11.3.war init -d /srv/gerrit
java -jar /opt/gerrit/gerrit-2.11.3.war reindex -d /srv/gerrit
Gerrit should now be up and running and accessible through the web interface. By default this is on port 8080. Log into the new Gerrit instance with your credentials. Make sure you're the first one to have registered, and you will automatically have been added to the Administrators group.
You can add more users into the Administrators group later on using the gerrit set-members command, or the web interface.
Go to the settings page, 'HTTP Password', and generate an HTTP password for yourself. You'll need it in the next step. The password can take a long time to appear for some reason, or it might not work at all. Click off the page and come back to it and it might suddenly have appeared. I've not investigated why this happens.
Generate the SSH keys you need, if you don't have them.
mkdir -p keys
ssh-keygen -t rsa -b 4096 -C 'lorry@gerrit.baserock.org' -N '' -f keys/lorry-gerrit.key
Now set up the Gerrit access configuration. This Ansible playbook requires a couple of non-standard packages.
git clone git://git.baserock.org/delta/python-packages/pygerrit.git
git clone git://github.com/ssssam/ansible-gerrit
cd ansible-gerrit && make; cd -
export GERRIT_URL=<gerrit web URL>
export GERRIT_ADMIN_USERNAME=<your username>
export GERRIT_ADMIN_PASSWORD=<your generated HTTP password>
export GERRIT_ADMIN_REPO=ssh://you@gerrit:29418/All-Projects.git
ANSIBLE_LIBRARY=./ansible-gerrit PYTHONPATH=./pygerrit \
ansible-playbook baserock_gerrit/gerrit-access-config.yml
Run:
ansible-playbook -i hosts baserock_gerrit/instance-mirroring-config.yml
Now clone the Gerrit's lorry-controller configuration repository, commit the configuration file to it, and push.
# FIXME: we could use the git_commit_and_push Ansible module for this now,
# instead of doing it manually.
git clone ssh://$GERRIT_ADMIN_USERNAME@gerrit.baserock.org:29418/local-config/lorries.git /tmp/lorries
cp baserock_gerrit/lorry-controller.conf /tmp/lorries
cd /tmp/lorries
git checkout -b master
git add .
git commit -m "Add initial Lorry Controller mirroring configuration"
git push origin master
cd -
Now SSH in as 'root' to gerrit.baserock.org, tunnelling the lorry-controller webapp's port to your local machine:
ssh -L 12765:localhost:12765 root@gerrit.baserock.org
Visit http://localhost:12765/1.0/status-html. You should see the lorry-controller status page. Click 'Re-read configuration'; if there are any errors in the configuration it'll tell you. If not, it should start mirroring stuff from your Trove.
Create a Gitano account on the Trove you want to push changes to for the Gerrit user. The instance-config.yml Ansible playbook will have generated an SSH key. Run these commands on the Gerrit instance:
ssh git@git.baserock.org user add gerrit "gerrit.baserock.org" gerrit@baserock.org
ssh git@git.baserock.org as gerrit sshkey add main < ~gerrit/.ssh/id_rsa.pub
Add the 'gerrit' user to the necessary -writers groups on the Trove, to allow the gerrit-replication plugin to push merged changes to 'master' in the Trove.
ssh git@git.baserock.org group adduser baserock-writers gerrit
ssh git@git.baserock.org group adduser local-config-writers gerrit
Add the host key of the remote Trove to the Gerrit system:
sudo -u gerrit sh -c 'ssh-keyscan git.baserock.org >> ~gerrit/.ssh/known_hosts'
Check the 'gerrit' user's Trove account is working.
sudo -u gerrit ssh git@git.baserock.org whoami
Now enable the gerrit-replication plugin, check that it's now in the list of plugins, and manually start a replication cycle. You should see log output from the final SSH command showing any errors.
ssh $GERRIT_ADMIN_USERNAME@gerrit.baserock.org -p 29418 gerrit plugin enable replication
ssh $GERRIT_ADMIN_USERNAME@gerrit.baserock.org -p 29418 gerrit plugin ls
ssh $GERRIT_ADMIN_USERNAME@gerrit.baserock.org -p 29418 replication start --all --wait
ansible-galaxy install -r baserock_storyboard/ansible-galaxy-roles.yaml -p ./baserock_storyboard/roles
nova volume-create \
--display-name storyboard-volume \
--display-description 'Storyboard volume' \
--volume-type Ceph \
100
nova boot storyboard.baserock.org \
--key-name $keyname \
--flavor 'dc1.1x1.20' \
--image $ubuntu_image_id \
--nic "net-id=$network_id,v4-fixed-ip=192.168.222.131" \
--security-groups default,web-server \
--user-data baserock-ops-team.cloud-config
nova volume-attach storyboard.baserock.org <volume-id> /dev/vdb
ansible-playbook -i hosts baserock_storyboard/instance-config.yml
ansible-playbook -i hosts baserock_storyboard/instance-backup-config.yml
ansible-playbook -i hosts baserock_storyboard/instance-storyboard-config.yml
Mason is the name we use for an automated build and test system used in the Baserock project. The V2 Mason that runs at https://mason-x86-32.baserock.org/ and https://mason-x86-64.baserock.org/ lives in definitions.git, and is thus available in infrastructure.git too by default.
To build mason-x86-64:
git clone git://git.baserock.org/baserock/baserock/infrastructure.git
cd infrastructure
morph build systems/build-system-x86_64.morph
morph deploy baserock_mason_x86_64/mason-x86-64.morph
nova boot mason-x86-64.baserock.org \
--key-name $keyname \
--flavor 'dc1.2x2' \
--image baserock_mason_x86_64 \
--nic "net-id=$network_id,v4-fixed-ip=192.168.222.80" \
--security-groups internal-only,mason-x86 \
--user-data baserock-ops-team.cloud-config
The mason-x86-32 system is the same; just substitute '32' for '64' in the above commands.
Note that the Masons are NOT in the 'default' security group, they are in 'internal-only'. This is a way of enforcing the policy that the Baserock reference system definitions can only use source code hosted on git.baserock.org, by making it impossible to fetch code from anywhere else.
To deploy to production, run these commands in a Baserock 'devel' or 'build' system.
nova volume-create \
--display-name git.baserock.org-home \
--display-description '/home partition of git.baserock.org' \
--volume-type Ceph \
300
git clone git://git.baserock.org/baserock/baserock/infrastructure.git
cd infrastructure
morph build systems/trove-system-x86_64.morph
morph deploy baserock_trove/baserock_trove.morph
nova boot git.baserock.org \
--key-name $keyname \
--flavor 'dc1.8x16' \
--image baserock_trove \
--nic "net-id=$network_id,v4-fixed-ip=192.168.222.58" \
--security-groups default,git-server,web-server,shared-artifact-cache \
--user-data baserock-ops-team.cloud-config
nova volume-attach git.baserock.org <volume-id> /dev/vdb
# Note, if this floating IP is not available, you will have to change
# the DNS in the DNS provider.
nova add-floating-ip git.baserock.org 185.43.218.183
ansible-playbook -i hosts baserock_trove/instance-config.yml
# Before configuring the Trove you will need to create some ssh
# keys for it. You can also use existing keys.
mkdir private
ssh-keygen -N '' -f private/lorry.key
ssh-keygen -N '' -f private/worker.key
ssh-keygen -N '' -f private/admin.key
# Now you can finish the configuration of the Trove with:
ansible-playbook -i hosts baserock_trove/configure-trove.yml
This is a quick guide on how to create a new repo to hold Baserock project stuff.
The creation of the repo must have been proposed on baserock-dev and had two +1s.
Ideally, don't create a new repo. We don't want development to be split across dozens of different repos, and we don't want Gerrit and the git.baserock.org/baserock/baserock prefix to become full of clutter. If you're prototyping something, use a different Git server (GitHub, for example). But it is sometimes necessary.
- Create the repo on git.baserock.org:

  ssh git@git.baserock.org create baserock/baserock/$NAME
  ssh git@git.baserock.org config baserock/baserock/$NAME \
      set project.description "$DESCRIPTION"
The 'lorry-controller' service on gerrit.baserock.org will automatically create the corresponding project in Gerrit whenever it next runs.
- Add the project in Storyboard. First edit baserock_storyboard/projects.yaml to add the new project to the list, then:

  scp baserock_storyboard/projects.yaml ubuntu@storyboard.baserock.org:
  ssh ubuntu@storyboard.baserock.org storyboard-db-manage load_projects projects.yaml
- Prepare a patch for infrastructure.git with your changes, and submit it to Gerrit.