Entry points for OpenStack Keystone in the Rackspace Public Cloud.
Rather than impacting the existing v2 identity system, Capstone builds on top of it by deploying Keystone as a reverse proxy in front of the v2 identity service. This design is guided by the following goals:
- Reliability
  - Keystone v3 should never impact the reliability of Rackspace v2 in any way.
  - Keystone v3 should degrade gracefully when Rackspace v2 has outages, downtime, or inconsistent responses.
- DefCore compliance
  - Keystone v3 must maintain complete DefCore compliance.
- Operational independence
  - Deploying Keystone v3 should not impact Rackspace v2 at all, or vice versa; you should be able to deploy new versions of either service independently of the other.
  - It should be trivial to scale vertically and horizontally as necessary.
- Seamless experience
  - Rackspace v2 is the single source of truth for authentication. Only Keystone v3-specific authorization data may live in Keystone.
- Performance
  - Keystone v3 should add minimal response-time overhead to Rackspace v2.
- Create a token. This is required for DefCore compliance. The returned token must work with existing OpenStack services, which currently use v2 for token validation.

  The following authentication claims are expected to be supported by the v3 API:

  - password + user_id
  - password + user_name + user_domain_id
  - password + user_name + user_domain_name
  - token (not yet supported, but a logical part of our roadmap)

  The following authorization scopes are supported by the v3 API:

  - Unscoped, or automatically scoped to a preferred project
  - Project-scoped: project_id
  - Project-scoped: project_name + project_domain_id
  - Project-scoped: project_name + project_domain_name
  - Domain-scoped: domain_id
  - Domain-scoped: domain_name
- Validate a token. This is not yet supported, but is a logical part of our roadmap. Rackspace Public Cloud services should use v3 internally, and the first step is to allow them to validate user tokens against Keystone.
- User account management. This is not yet supported, but is a logical part of our roadmap. Users should be able to update their own password and change their username with the same user experience as the v2 API.
- Delegation. This is not yet supported, but is a logical part of our roadmap. Users should be able to use v3 to delegate authorization to each other (trusts or otherwise).
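Since Capstone fronts v2, the heart of the token-creation use case is translating a v3 password-authentication request body into its v2 equivalent. Below is a minimal sketch of that translation, assuming the standard Keystone v3 and Rackspace v2 payload shapes; the function name and scope handling are illustrative, not Capstone's actual code:

```python
# Sketch: translate a Keystone v3 password-auth request body into the
# equivalent Rackspace v2 request. Names here are illustrative only.

def v3_to_v2_auth(v3_body):
    """Map a v3 password-auth payload onto a v2 passwordCredentials payload."""
    identity = v3_body["auth"]["identity"]
    if identity["methods"] != ["password"]:
        raise NotImplementedError("only password auth is proxied in this sketch")
    user = identity["password"]["user"]
    v2_body = {
        "auth": {
            "passwordCredentials": {
                "username": user["name"],
                "password": user["password"],
            }
        }
    }
    # A v3 project scope maps onto a v2 tenant.
    scope = v3_body["auth"].get("scope", {})
    if "project" in scope:
        v2_body["auth"]["tenantId"] = scope["project"]["id"]
    return v2_body

v3_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {"user": {"name": "alice", "password": "s3cret"}},
        },
        "scope": {"project": {"id": "123456"}},
    }
}
print(v3_to_v2_auth(v3_request))
```

The reverse mapping (rewriting the v2 response into a v3 token document) follows the same pattern in the other direction.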
v3 | v2 |
---|---|
user ID | user ID |
user name | user name |
user's domain ID | user's tenant ID |
user's domain name | user's tenant name |
project ID | sub-tenant ID |
project name | sub-tenant name |
project's domain ID | super-tenant ID |
project's domain name | super-tenant name |
domain ID | tenant ID |
domain name | tenant name |
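The attribute mapping above can be captured as a simple lookup table used when rewriting requests or responses. A sketch follows; the key names are illustrative, not Capstone's internal identifiers:

```python
# Sketch: the v3 -> v2 attribute mapping from the table above as a lookup.
# Key names are illustrative, not Capstone's internal identifiers.
V3_TO_V2_ATTRIBUTES = {
    "user_id": "user_id",
    "user_name": "user_name",
    "user_domain_id": "tenant_id",
    "user_domain_name": "tenant_name",
    "project_id": "subtenant_id",
    "project_name": "subtenant_name",
    "project_domain_id": "supertenant_id",
    "project_domain_name": "supertenant_name",
    "domain_id": "tenant_id",
    "domain_name": "tenant_name",
}

def translate_attributes(v3_params):
    """Rewrite v3 attribute names into their v2 equivalents."""
    return {V3_TO_V2_ATTRIBUTES[k]: v for k, v in v3_params.items()}

print(translate_attributes({"user_name": "alice", "domain_id": "123456"}))
```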
- Unit
  Tests focused on internal methods, functions, or classes.
- Functional
  Tests that cover user-accessible APIs (including super-type users) in business-level scenarios.
- API
  Tests that focus on a single API (as much as possible).
- Performance
  Tests that measure throughput and response time for a mixture of calls that approximates production usage, adjusted for the environment under test.
- Integration
  Developer tests that emphasize exercising multiple systems together.
- Model-based
  Tests that emphasize an ASM or FSM approach to modeling the system(s) under test.
- Stress
  Testing at progressively higher levels of load in order to determine breaking points.
- Reliability
  Testing to determine mean time between failures.
- System
  Similar to developer integration tests, but tend to take more of a black-box perspective.
Unit tests can be run in a local development environment using tox:

tox

Running tox without specifying an environment will execute all testing environments, including unit tests, integration tests, and syntax linting.
Integration tests run against both Capstone and the Rackspace v2.0 Identity API, and they require additional configuration before they can be run. First, you must run Capstone somewhere (see the Deployment section below). Second, two files containing credentials for a Rackspace account must be on the system. The first is ~/.config/openstack/clouds.yaml:
---
clouds:
  rackspace:
    profile: rackspace
    auth:
      domain_id: <domain_id>
      project_id: <account_id>
      user_id: <user_id>
      username: <username>
      password: <password>
    region_name: <region_id>
  keystone:
    profile: capstone
The second file is ~/.config/openstack/clouds-public.yaml:
---
public-clouds:
  rackspace:
    auth:
      auth_url: https://identity.api.rackspacecloud.com/v2.0/
  capstone:
    auth:
      auth_url: http://localhost:5000/v3/
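The two files work together: a named cloud's entry in clouds.yaml is overlaid on the matching profile from clouds-public.yaml, which is how the `keystone` cloud picks up the Capstone auth_url. A self-contained sketch of that merge follows, using plain dicts in place of the parsed YAML; the resolution logic here is simplified, not the library's actual implementation:

```python
# Sketch: how per-user clouds.yaml entries are merged with the shared
# clouds-public.yaml profiles. Plain dicts stand in for the parsed YAML
# so the example is self-contained; the real merge is done by tooling.

clouds = {  # contents of ~/.config/openstack/clouds.yaml
    "rackspace": {"profile": "rackspace",
                  "auth": {"username": "<username>", "password": "<password>"}},
    "keystone": {"profile": "capstone", "auth": {}},
}

public_clouds = {  # contents of ~/.config/openstack/clouds-public.yaml
    "rackspace": {"auth": {"auth_url": "https://identity.api.rackspacecloud.com/v2.0/"}},
    "capstone": {"auth": {"auth_url": "http://localhost:5000/v3/"}},
}

def resolve(cloud_name):
    """Overlay a cloud's settings on top of its public profile."""
    cloud = clouds[cloud_name]
    profile = public_clouds.get(cloud.get("profile"), {})
    merged_auth = {**profile.get("auth", {}), **cloud.get("auth", {})}
    return {**profile, **cloud, "auth": merged_auth}

print(resolve("keystone")["auth"]["auth_url"])  # the Capstone endpoint
```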
The integration tests use os-client-config to parse these files and build requests against both the Rackspace endpoint and the Keystone endpoint. They can be run through tox by specifying the integration environment, or by just running tox:

tox -e integration

Or with any Python test runner:

python -m unittest capstone.tests.integration.test_integration
These will mostly be the DefCore Tempest tests and other API tests.
We will run the standard Rackspace Identity mix with an additional 10/100 RPS for Capstone. Rackspace Identity traffic includes a large number of repeated calls, which matters because Capstone caches authentication calls to v2; the mix of users issued against Capstone authentication should reflect that.
Where time permits, we will use model-based tests to supplement the integration tests. These tests will focus on switching between authentication tokens issued through Capstone and tokens issued directly through v2, combined with other v2 methods. They have lower priority than the other testing.
We do not have a dedicated performance testing environment, so we will not be able to perform stress or reliability testing.
System testing will focus on the identified risk areas below. It will likely be a combination of model-based tests and tests using the v2 system-test framework.
- Token compatibility
- Service catalog
- Caching mechanism
- Role Based Access Control
- Keystone specific authentication mechanisms
- Identity-specific authentication mechanisms (MFA, federation)
- Repose V3 compatibility
For the initial release, we only need to be concerned with v3 tokens used in v2, since v3 will only support non-token-related authentication. This area is already partially covered by integration tests. Some additional coverage is needed for basic checks against a few other v2 APIs; those can be done as part of Role Based Access Control testing.
One user should be created through the vanilla v2 create-user call (i.e., without a numeric domain), and another with the "one create user call", which is the Identity create-user call with some added magic to automatically add support for Cloud Compute and Storage. The service catalog for both user-creation calls should closely match that of the Rackspace v2 create-user call.
Capstone uses a caching mechanism, so some testing is needed to verify correct behavior for dirty (invalid) caches. Special attention should be paid to token revocations, user updates, and implicit token revocations (changing a password, enabling multifactor authentication).
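The invalidation behavior these tests target can be sketched with an in-memory stand-in for the cache; all names here are illustrative, not Capstone's implementation:

```python
# Sketch: the cache behavior under test. A token cached after a v2
# authentication must be dropped when an invalidating event arrives
# (explicit revocation, password change, MFA enablement, user disable).

class TokenCache:
    def __init__(self):
        self._tokens = {}  # token_id -> user_id

    def store(self, token_id, user_id):
        self._tokens[token_id] = user_id

    def is_valid(self, token_id):
        return token_id in self._tokens

    def revoke_token(self, token_id):
        self._tokens.pop(token_id, None)

    def revoke_user(self, user_id):
        """Implicit revocation: password change, MFA enabled, user disabled."""
        for tid, uid in list(self._tokens.items()):
            if uid == user_id:
                del self._tokens[tid]

cache = TokenCache()
cache.store("tok-1", "alice")
cache.store("tok-2", "alice")
cache.revoke_user("alice")      # e.g. alice changed her password via v2
print(cache.is_valid("tok-1"))  # False: the cache must not serve stale tokens
```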
This testing covers the different Identity role-based access control rules. It doesn't need to be exhaustive; a couple of test cases around each type of rule should be sufficient.
- user admin in same domain
- user admin in different domain
- non user admin in same domain
- non user admin in different domain
- identity admin
- identity service admin
These are covered under API and integration testing. Some additional basic ad hoc testing should be performed.
MFA authentication should yield a reasonable error message in Keystone, similar to attempting to use v3 federated authentication.
It's not clear yet if this will be in scope for testing.
- V3 token, V2 validate
- V2 token, V3 validate
- V2 token, rescoping that token with V3 token authentication, with variations for scoping to project and domain.
- V2 multifactor authentication token, V3 token auth
- V2 federated token, V3 token auth
- verify that the "auth by" field is returned correctly
- expired tokens
- V2 multifactor authentication scoped token in V3 auth
- V2 password reset token in V3 auth
- V3 token used for impersonation in V2
- V2 impersonation token should either be rejected in V3 or the impersonation bits should be ignored (security risk we should test for: you should not be able to get a real V3 token given a V2 impersonation token.)
- V3 authenticate for user with mfa enabled
- V3 auth, V2 revoke
- V2 auth, V3 revoke (revoke is not yet supported.)
- V3 auth, then V2 get role and get tenant should return the same results as V2 auth followed by get role and get tenant, respectively.
Verify V3 methods are included/excluded according to the policy.
- item-by-item comparison for:
  - nast and mosso; this is the Identity "one create user call".
  - default region; this is the create-user call with a non-numeric domain.
- V3 authenticate, V2 remove mosso tenant, V3 should show updated service catalog.
- V3 authenticate, V2 revoke, V3 token auth should fail.
- V3 authenticate, V2 password update, V3 token should be revoked.
- V3 authenticate, V2 enable mfa, V3 token should be revoked.
- other indirect revocations: disable user, disable domain; pick a couple.
- V3 authenticate, V2 remove tenant, V3 tenant scoped authentication should fail.
- V3 authentication, V2 user disable (token should be revoked).
A V3 token scoped to domain A can't call, for example, "add role to user" for a user in domain B.
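That domain-boundary rule can be sketched as a simple policy check; the function and field names below are hypothetical, not Keystone's actual policy engine:

```python
# Sketch: the domain-boundary check exercised by the test case above. A token
# scoped to domain A must not grant role assignments on users in domain B.
# Function and field names are illustrative only.

def can_assign_role(token, target_user):
    """Allow role assignment only within the token's own domain scope."""
    return token.get("domain_id") == target_user.get("domain_id")

token_a = {"domain_id": "domain-A"}
user_in_b = {"domain_id": "domain-B"}
print(can_assign_role(token_a, user_in_b))  # False: cross-domain is denied
```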
Our continuous integration process leverages wercker. With a local docker server and the wercker CLI installed, you can replicate the CI process with:
wercker build
Cache invalidator

The cache invalidator invalidates Capstone's cache by reading the Rackspace Identity event feeds.
Capstone will build a console script to start the process:
capstone-cache-invalidator
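The invalidator's core loop, polling event feed entries and dropping the affected cache entries, might look roughly like this; the feed reader and cache are stubbed, and the event type names are hypothetical:

```python
# Sketch: the shape of the cache-invalidator loop. It reads a batch of
# Rackspace Identity feed events and drops cache entries for each affected
# user. The feed and cache are stubbed; all names are illustrative.

def invalidate_from_feed(events, cache):
    """Apply one batch of feed events to the cache; return users touched."""
    touched = []
    for event in events:
        if event["type"] in ("USER.UPDATE", "TOKEN.REVOKE", "USER.DISABLE"):
            cache.pop(event["user_id"], None)
            touched.append(event["user_id"])
    return touched

cache = {"alice": "cached-auth-response", "bob": "cached-auth-response"}
events = [{"type": "USER.UPDATE", "user_id": "alice"},
          {"type": "USER.CREATE", "user_id": "carol"}]  # ignored event type
print(invalidate_from_feed(events, cache))  # ["alice"]
print("alice" in cache)                     # False
```

In the real service this loop would run continuously, sleeping for the configured polling period between feed reads.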
Deployment tooling lives in the deploy/ directory and uses Ansible. Prior to deploying Capstone, specific upstream dependencies need to be resolved. To resolve these using ansible-galaxy, run the following:

ansible-galaxy install --role-file=ansible-role-requirements.yml
The deploy.yml playbook expects an inventory file that looks like:
[keystone_all]
<keystone_endpoint_ip_address>
The playbook also expects us to provide a capstone.conf:
[service_admin]
username = <username>
password = <password>
[rackspace]
base_url = <rackspace_api_endpoint>
feed_url = <rackspace_feed_endpoint>
polling_period = <feed_polling_period>
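Since capstone.conf is plain INI, it can be read with the standard library. A sketch follows, using placeholder values in place of the real credentials and endpoints:

```python
# Sketch: reading capstone.conf with the standard library. Section and option
# names come from the example above; the values here are placeholders only.
import configparser

CONF = """
[service_admin]
username = admin
password = secret

[rackspace]
base_url = https://identity.example.com/v2.0/
feed_url = https://feeds.example.com/identity/events/
polling_period = 60
"""

config = configparser.ConfigParser()
config.read_string(CONF)
print(config["rackspace"]["base_url"])
print(config.getint("rackspace", "polling_period"))  # 60
```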
The service admin account is provided by Rackspace. Once the capstone.conf and inventory files are in place, we're ready to deploy:
ansible-playbook -i inventory deploy.yml
Capstone uses very basic versioning. The following is an example of a Capstone build:

capstone-0.1+be7bcf8.tar.gz

The SHA of the build, be7bcf8, is appended to the end of the version. A specific version of Capstone can be built using the setup.py script. Note that it is required to manually create a lightweight 0.1 tag. The tag only needs to be created once, before you build Capstone; this will be a manual process until Capstone is tagged properly upstream. The git tag command will tag Capstone at its first commit:

git tag 0.1 7fa2726
git checkout be7bcf8
python setup.py sdist

The resulting build will live under the capstone/dist/ directory. Note that pip will only recognize Capstone as being version 0.1, regardless of the commit that was used in the build.
A Docker image can be built using the build_docker.sh script. The image is tagged with both the git SHA revision and the latest tag. The created image can then be run using docker-compose. The docker-compose.yml file exposes the following environment variables: SERVICE_USER_NAME, SERVICE_USER_PASSWORD, RACKSPACE_BASE_URL.
Create the docker image and run it with the following commands:
./build_docker.sh
docker-compose up
The developer workflow mirrors that of OpenStack (refer here if you're looking for additional detail), except that we host our code on github.com/rackerlabs instead of github.com/openstack, and therefore must also use GerritHub instead of review.openstack.org for code reviews. These differences result in the following process:
- You'll first need a GitHub account, and then use that to authenticate with GerritHub (signing in with "DEFAULT" access is sufficient).
- Add your public SSH keys to both your GitHub settings and GerritHub settings pages.
- Clone the repository:
git clone git@github.com:rackerlabs/capstone.git && cd capstone/
- Set up git-review:

pip install --upgrade git-review && git review -s

- Create a branch to work from, or go untracked:

git checkout HEAD~0

- Create a commit:

git commit

A Change-Id will be appended to your commit message to uniquely identify your code review.

- Upload it for review:

git review

You'll get a link to your code review on gerrithub.io. A bot will then pull your change, run wercker build on it to test it, and upload the results back to Gerrit, setting the Verified field to indicate build success or failure. If you have a Docker server available, you can run wercker build yourself using the wercker CLI.

- Your patch will be peer reviewed. If you need to upload a revision of your patch, first fetch the latest patchset from Gerrit using git review -d <change-number>, where your change number is NOT your Change-Id, but a unique number found in the URL of your code review.

- When your patch receives a +2 and is passing tests, it will be automatically merged.