Keystone mid-cycle recap for Liberty
by Steve Martinelli

Tags

  • software
  • openstack

Originally posted on https://developer.ibm.com/opentech/2015/07/23/keystone-mid-cycle-recap-for-liberty/

The Keystone mid-cycle for the Liberty release took place at Boston University (BU) and was hosted by folks working on the Massachusetts Open Cloud (MOC). It was our best-attended mid-cycle yet, and also one of our most productive. Kudos to the folks at BU and the MOC for hosting us, and a special thanks goes out to Adam Young and Piyanai Saowarattitada for organizing most of the logistics.

By the numbers

23 Keystone community members attended, including 11 core reviewers. 9 different companies were represented, from 5 different countries (USA, UK, Russia, Switzerland and Canada). We collaboratively revised and merged over 25 patches across the identity program's 5 repositories.

Full list of attendees:

Topics of interest:

Dynamic Policy

Policies in OpenStack are currently stored in JSON files (typically named policy.json). Each project ships its own policy.json and uses oslo.policy (read more about it here) to evaluate the rules in the policy file.

The problems: editing a file is a less-than-elegant UX, and each project can define rules however it wants, which leads to divergent rules and policies.

The proposed solution: centralize policies by storing them in SQL. A project would be able to fetch a policy from Keystone, and retrieve a newer one when needed. Samuel de Medeiros Queiroz (from UFCG) gave a fantastic demo of the solution over a video call.

My concerns: I'm not seeing enough demand from operators for this feature. The operators we had at the mid-cycle have simply learned to live with the issues; they use Ansible (or other automation tools) to update the policy files for each project on their hosts. Additionally, I think better examples of, and enforcement of, how policy files are written would solve the issue of clashing rules. Overall, I'm hesitant to pick up a bunch of new code to solve an issue that folks have already worked around. Adam gave a presentation on the problem and proposed solution; it's posted on his GitHub account.
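To make the clashing-rules problem concrete, here is a toy evaluator for the flat "role:<name>" rule style found in policy.json files. This is an illustrative sketch only; real deployments use oslo.policy, which supports a much richer rule language, and the rule strings below are hypothetical examples, not Keystone's actual policy file.

```python
import json

# Hypothetical policy.json fragment; real rules live in each project's file.
POLICY_JSON = """
{
    "identity:get_user": "role:admin",
    "identity:list_users": "role:admin or role:auditor"
}
"""


def check_rule(rule, credentials):
    """Evaluate a flat 'role:x or role:y' rule against the caller's roles."""
    return any(
        clause.strip().split(":", 1)[1] in credentials["roles"]
        for clause in rule.split(" or ")
    )


def enforce(action, credentials, policies):
    """Look up the rule for an action and check it (toy stand-in for
    oslo.policy's Enforcer.enforce)."""
    return check_rule(policies[action], credentials)


policies = json.loads(POLICY_JSON)
creds = {"roles": ["auditor"]}
print(enforce("identity:list_users", creds, policies))  # True
print(enforce("identity:get_user", creds, policies))    # False
```

Because every project writes strings like these independently, two projects can spell the "same" rule differently, which is exactly the divergence the centralized-policy proposal aims to eliminate.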

keystoneauth

Morgan Fainberg gave an update on keystoneauth, an initiative to further break apart python-keystoneclient. Once upon a time, python-keystoneclient was actually four projects in one; most folks just didn't know it at the time. I'll explain: python-keystoneclient was providing:

  • a middleware for services to include in their pipeline
  • tools to create an authenticated session
  • CRUD support for python bindings for our APIs
  • a command line interface

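Of these four pieces, the authenticated-session machinery is what keystoneauth carves out. The toy sketch below mimics that idea: a Session delegates to a pluggable auth object that lazily fetches and caches a token, then attaches it to every request. The class and method names here are illustrative stand-ins, not the real keystoneauth API.

```python
class PasswordAuth:
    """Stand-in for an auth plugin (e.g. v3 password authentication)."""

    def __init__(self, auth_url, username, password):
        self.auth_url = auth_url
        self.username = username
        self.password = password
        self._token = None

    def get_token(self):
        # A real plugin would POST credentials to the identity endpoint
        # here; we fake the issued token and cache it for reuse.
        if self._token is None:
            self._token = "demo-token"
        return self._token


class Session:
    """Stand-in for a keystoneauth-style session: it knows nothing about
    credentials, only how to ask its auth plugin for a token."""

    def __init__(self, auth):
        self.auth = auth

    def request_headers(self):
        return {"X-Auth-Token": self.auth.get_token()}


auth = PasswordAuth("http://keystone:5000/v3", "demo", "secret")
sess = Session(auth)
print(sess.request_headers())
```

The design point is the separation: middleware, python bindings, and the CLI can all share one session object instead of each reimplementing authentication, which is why the session piece deserves its own library.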
When fully broken apart, the Identity team will offer the same features, just through different libraries:

Functional Tests

I spent Thursday afternoon and most of Friday in a small working group, hacking away on functional tests with Marek Denis, Roxanna Gherle, David Stanek and Anita Kuno. It has become increasingly clear that we need functional tests in Keystone; what was once an afterthought to most of us is now a prime concern. We outlined 6 configurations that we need to start testing against:

  1. Our current CI/CD setup: SQL Identity, SQL Assignment, UUID Tokens
  2. Single LDAP for Identity: LDAP Identity, SQL Assignment, UUID and Fernet Tokens
  3. Multiple Identity Backends: SQL+LDAP Identity, SQL Assignment, UUID and Fernet Tokens
  4. Federating Identities: Federated Users + SQL Identity (service accounts), SQL Assignment, UUID and Fernet Tokens
  5. Keystone to Keystone: Any two of the above, with one setup as an IdP, the other as an SP.
  6. Notifications: Can reuse the current CI/CD, but requires a messaging service and listener to be setup.
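One way to think about the first four configurations is as a small parameter matrix: each backend combination is expanded into one job per token format. The sketch below is purely illustrative; the real gate jobs are defined in OpenStack's CI tooling, not in a list like this, and the config names are made up.

```python
# Hypothetical matrix covering configurations 1-4 above (Keystone-to-Keystone
# and notifications need extra infrastructure and are omitted).
CONFIGS = [
    {"name": "default", "identity": ["sql"], "assignment": "sql",
     "tokens": ["uuid"]},
    {"name": "ldap", "identity": ["ldap"], "assignment": "sql",
     "tokens": ["uuid", "fernet"]},
    {"name": "multi-backend", "identity": ["sql", "ldap"],
     "assignment": "sql", "tokens": ["uuid", "fernet"]},
    {"name": "federation", "identity": ["federated", "sql"],
     "assignment": "sql", "tokens": ["uuid", "fernet"]},
]


def job_matrix(configs):
    """Expand each configuration into one functional-test job per
    token format it needs to be exercised with."""
    return [
        (cfg["name"], token) for cfg in configs for token in cfg["tokens"]
    ]


for name, token in job_matrix(CONFIGS):
    print(f"run functional suite: config={name} tokens={token}")
```

Expanding the matrix up front makes it obvious how many jobs the team is signing up to maintain: seven runs for just these four configurations.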

Bonus Points

The Massachusetts Open Cloud gave a great live demo of their multi-federation setup. In their use case, users come in from a federated identity provider and may use different service providers for different OpenStack services. For instance, they may use Service Provider A for compute resources (nova), but Service Provider B for volume (cinder). As a long-time federation proponent, it was great to see folks using this in a way I didn't think possible.

There were many, many other topics discussed: python 3.4 support, hierarchical multi-tenancy, reseller use cases, fernet tokens for federation, general code cleanup and refactoring, and role assignment improvements. For a full list of the nitty-gritty details, look at the etherpad.