IBM Spectrum Scale Object Multi-region Configuration

Disclaimer: The content of this post is not approved nor endorsed by IBM.

Intro:

This is a quick start guide for enabling multi-region support for IBM Spectrum Scale Object. Multiple Object regions provide failover capability in the event that one region fails and help alleviate the data latency problems that arise when object data is accessed over the WAN. In this tutorial, we will demonstrate how to configure three different Object regions, where each region represents a different Spectrum Scale (GPFS) cluster. Ideally, each Object region will be located in a different geographical area so that object data requests are handled in their respective regions, providing optimal data access performance for the end user or application.

Agenda:

  • IBM Spectrum Scale Resources
  • Test Environment / Prerequisites
  • Enable Object multi-region support
  • Validate Object multi-region configuration

IBM Spectrum Scale Resources:


Test Environment:

For this tutorial, we will use three existing Spectrum Scale (GPFS) clusters in order to configure three distinct Swift Object regions. Each cluster will have Spectrum Scale 4.2.0 with Object installed. For steps on installing and configuring IBM Spectrum Scale, you can reference Configuring Object Using IBM Spectrum Scale. You can also consult the installation and deployment guides listed in the Resources section above.

The three clusters consist of the following object protocol nodes:

Primary Region:

Node  Daemon node name            IP address       CES IP address list
-----------------------------------------------------------------------
17   c40bbc2xn3.gpfs.net         192.168.40.73    192.168.11.5 192.168.11.7
25   c40bbc3xn7.gpfs.net         192.168.40.91    192.168.11.6 192.168.11.8

Secondary Region:

Node  Daemon node name            IP address       CES IP address list
-----------------------------------------------------------------------
19   c40abc1pn3.gpfs.net         192.168.40.3     192.168.6.12 192.168.6.13

Third Region:

Node  Daemon node name            IP address       CES IP address list
-----------------------------------------------------------------------
4   c80f3m5n15.gpfs.net         192.168.80.155   192.168.6.97 192.168.6.98
6   c80f3m5n17.gpfs.net         192.168.80.157   192.168.6.99


 

Enabling Multi-region Support for the Primary Region

To set up a multi-region environment, a primary region/cluster needs to be enabled for multi-region support. Multi-region support can be enabled during installation using the installation toolkit, or when manually configuring object using the mmobj command. Lastly, if object is already installed and configured, then 'mmobj multiregion enable' can be executed to enable multi-region support for the primary region. Note: only the primary cluster/region can be enabled for multi-region access after the initial installation/configuration. Secondary Spectrum Scale clusters which you wish to add to the multi-region environment must be installed with multi-region support. Otherwise, if the secondary regions were configured without multi-region support to begin with, then Object will need to be disabled on those clusters and reconfigured with multi-region support. In this tutorial, we will assume that multi-region support was not enabled during installation for the primary region/cluster, and so we will enable it as follows:

c40bbc2xn3 is a ces server from the primary region

#00:49:32# c40bbc2xn3:~ # mmobj multiregion enable
mmobj multiregion: Multi-region support is enabled in this cluster.  Region number: 1

If the command succeeds, you should expect a message indicating that multi-region support was enabled for Region number 1. You can confirm that multi-region support is properly enabled for the primary region by running 'mmobj multiregion list':

#00:49:44# c40bbc2xn3:~ # mmobj multiregion list
Region  Endpoint   Cluster Name         Cluster Id
------  ---------  -------------------  --------------------
1       RegionOne  pok_x86.gpfs.net     14008063887730057712

Next, we need to export the primary region’s configuration data:

#17:28:23# c40bbc2xn3:~ # mmobj multiregion export --region-file /tmp/region1.dat
mmobj multiregion: The Object multi-region export file was successfully created: /tmp/region1.dat
mmobj multiregion: Region 1 checksum is: 04217-13711

/tmp/region1.dat will need to be manually copied to region2 and region3. You can use scp, or you can copy the file to a network location that is accessible to both regions. Passwordless ssh access across the different regions is not a requirement since the syncing process is a manual step handled by the user.

#17:33:13# c40bbc2xn3:~ # scp /tmp/region1.dat c40abc1pn3:/tmp/region1.dat
region1.dat                                                                                                                        100%  490KB 490.0KB/s   00:00
#17:33:27# c40bbc2xn3:~ # scp /tmp/region1.dat c80f3m5n15:/tmp/region1.dat
root@c80f3m5n15's password:
region1.dat

Note: region1.dat only needs to be copied to one of the ces nodes in the secondary regions and not all nodes.

Adding Additional Regions to the Multi-region Environment

Now that multi-region support has been successfully enabled in the primary region, we can proceed to adding our two additional regions to the existing multi-region environment. We assume that Object has been installed, but as mentioned in the previous section, we need to configure/reconfigure Object with multi-region support for each additional cluster using the primary region's configuration data. So if Object is already configured, then it needs to be disabled first by running 'mmces service disable OBJ'.

Note: Only a single keystone server is supported in a multi-region environment, which means the secondary regions must be configured to use an existing, external keystone server via the --remote-keystone-url flag when running the 'mmobj swift base' command.

Configuring Region 2:

c40abc1pn3 is a ces server from region 2

#18:19:26# c40abc1pn3:/tmp # /usr/lpp/mmfs/bin/mmobj swift base -g /gpfs/objfs --cluster-hostname cesobjnode --remote-keystone-url 'http://cesobjnode:35357/v3' --configure-remote-keystone --join-region-file /tmp/region1.dat --region-number 2 --admin-password Passw0rd
mmobj swift base: Validating execution environment.
mmobj swift base: Creating fileset /dev/objfs object_fileset.
mmobj swift base: Validating Keystone environment.
mmobj swift base: Configuring Swift services.
mmcesobjmr: Multi-region support is enabled in this cluster.  Region number: 2
mmobj swift base: Setting cluster attribute object_singleton_node=192.168.6.12.
mmobj swift base: Uploading configuration changes to the CCR.
mmobj swift base: Configuration complete.

We are now ready to enable the Object service by running 'mmces service enable OBJ'.

Next, we need to export Region 2’s configuration data and sync it with the primary region:

#21:38:40# c40abc1pn3:~ # mmobj multiregion export --region-file /tmp/region2.dat
mmobj multiregion: The Object multi-region export file was successfully created: /tmp/region2.dat
mmobj multiregion: Region 2 checksum is: 03405-63053

#21:51:45# c40abc1pn3:~ # scp /tmp/region2.dat c40bbc2xn3:/tmp/region2.dat
region2.dat

#22:16:05# c40bbc2xn3:/tmp # mmobj multiregion import --region-file /tmp/region2.dat
mmobj multiregion: Importing region checksum 03405-63053.
[I] Building ring file: account.builder
[I] Building ring file: container.builder
[I] Building ring file: object.builder
[I] Building ring file: object-14001511240.builder
[I] The existing ring files are up-to-date and a rebuild is not necessary.
mmobj multiregion: The region config has been updated.
mmobj multiregion: Region 1 checksum is: 48377-15827
mmobj multiregion: The configuration of this region has updated information that must
be propagated to the other regions using the 'mmobj multiregion' command.

We should now see two regions in our multi-region environment:

#22:19:59# c40bbc2xn3:/tmp # mmobj multiregion list
Region  Endpoint   Cluster Name        Cluster Id
------  ---------  ------------------  --------------------
1       RegionOne  pok_x86.gpfs.net    14008063887730057712
2       Region2    pok_ppc64.gpfs.net  6012206590852568800

Now we must export the updated primary region’s configuration data and copy it to region 3:

#22:25:02# c40bbc2xn3:/tmp # mmobj multiregion export --region-file /tmp/region1.dat
mmobj multiregion: The Object multi-region export file was successfully created: /tmp/region1.dat
mmobj multiregion: Region 1 checksum is: 48377-15827

#22:25:22# c40bbc2xn3:/tmp # scp /tmp/region1.dat c80f3m5n15:/tmp/region1.dat
root@c80f3m5n15's password:
region1.dat

Configuring Region 3:

c80f3m5n15 is a ces server from region 3

#21:45:45# c80f3m5n15:~ # /usr/lpp/mmfs/bin/mmobj swift base -g /gpfs/objfs --cluster-hostname cesobjnode --remote-keystone-url 'http://cesobjnode:35357/v3' --configure-remote-keystone --join-region-file /tmp/region1.dat --region-number 3 --admin-password Passw0rd
mmobj swift base: Validating execution environment.
mmobj swift base: Creating fileset /dev/objfs object_fileset.
mmobj swift base: Validating Keystone environment.
mmobj swift base: Configuring Swift services.
mmcesobjmr: Multi-region support is enabled in this cluster.  Region number: 3
mmobj swift base: Setting cluster attribute object_singleton_node=192.168.6.98.
mmobj swift base: Uploading configuration changes to the CCR.
mmobj swift base: Configuration complete.

We are now ready to enable the Object service by running 'mmces service enable OBJ'.

Next, we need to export Region 3’s configuration data and copy it to the primary region:

#21:56:02# c80f3m5n15:~ # mmobj multiregion export --region-file /tmp/region3.dat
mmobj multiregion: The Object multi-region export file was successfully created: /tmp/region3.dat
mmobj multiregion: Region 3 checksum is: 38710-02214

#21:56:10# c80f3m5n15:~ # scp /tmp/region3.dat c40bbc2xn3:/tmp/region3.dat
region3.dat

At this point, if you run 'openstack endpoint list' from the primary region, you should see all three regions listed as shown below:

multiregion_openstack_endpoints

Sync Region 3 with Primary Region:

Since region3 now has the most up-to-date configuration data, we need to sync it back with region1:

#22:57:03# c40bbc2xn3:/tmp # mmobj multiregion import --region-file /tmp/region3.dat
mmobj multiregion: Importing region checksum 04215-46679.
[I] Building ring file: account.builder
[I] Building ring file: container.builder
[I] Building ring file: object.builder
[I] Building ring file: object-14001511240.builder
mmobj multiregion: The region config has been updated.
mmobj multiregion: Region 1 checksum is: 04215-46679

We should now have all three regions in our multi-region environment:

#22:57:32# c40bbc2xn3:/tmp # mmobj multiregion list
Region  Endpoint   Cluster Name        Cluster Id
------  ---------  ------------------  --------------------
1       RegionOne  pok_x86.gpfs.net    14008063887730057712
2       Region2    pok_ppc64.gpfs.net  6012206590852568800
3       Region3    omnipath.gpfs.net   6788346196758363892

Finally, we need to sync the primary region with region2 so that region2 becomes aware of region3:

#23:00:09# c40bbc2xn3:/tmp # mmobj multiregion export --region-file /tmp/region1.dat
mmobj multiregion: The Object multi-region export file was successfully created: /tmp/region1.dat
mmobj multiregion: Region 1 checksum is: 04215-46679

#23:00:21# c40bbc2xn3:/tmp # scp /tmp/region1.dat c40abc1pn3:/tmp/region1.dat
region1.dat

#23:01:35# c40abc1pn3:~ # mmobj multiregion import --region-file /tmp/region1.dat
mmobj multiregion: Importing region checksum 04215-46679.
[I] Building ring file: account.builder
[I] Building ring file: container.builder
[I] Building ring file: object.builder
[I] Building ring file: object-14001511240.builder
mmobj multiregion: The region config has been updated.
mmobj multiregion: Region 2 checksum is: 37509-20319
mmobj multiregion: The configuration of this region has updated information that must
be propagated to the other regions using the 'mmobj multiregion' command.

At this point, all three regions should be synced. You should be able to list, create, and download containers and objects from any of the regions.
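With all three regions in sync, a quick way to exercise the environment is a small smoke test run from a ces node in each region. This is only a hedged sketch, not from the original post: it assumes the swift client is installed, an openrc file exists in root's home directory, and the container/object names below are made up.

```shell
# Hypothetical smoke test; requires a configured cluster with the swift
# client installed and an openrc file under /root. Defined as a function
# so it can be sourced and invoked on a CES node in each region.
smoke_test() {
    source ~/openrc                        # load swift/keystone environment
    swift post demo-container              # create a test container
    echo "hello multiregion" > /tmp/demo-object
    swift upload demo-container /tmp/demo-object
    swift list demo-container              # object should be visible from any region
}
# On a CES node in each region, run: smoke_test
```

If the object uploaded in one region shows up in the listing from the other two regions, the ring and configuration data are consistent across clusters.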

Configuring Spectrum Scale Swift and Keystone with Haproxy


Intro:

In this blog, we will discuss how to configure Spectrum Scale Swift and Keystone behind HAProxy. Not only will HAProxy act as our load balancer, but it will also allow us to send swift and keystone requests over SSL.

Agenda:

  • Resources
  • Test Environment / Prerequisites
  • Installing HAProxy
  • Configuring HAProxy
  • Demonstrate Swift/Keystone and HAProxy in action

Resources


Test Environment / Prerequisites

For this tutorial, we will use the same test environment we used to configure Spectrum Scale Object (Configuring Object Using IBM Spectrum Scale). To recap, our test environment consists of several nsd server nodes and two ces protocol nodes which will export our object data and which also have the swift and openstack clients installed. These nodes have already been configured with Object using the steps outlined in the Object configuration tutorial. Finally, we will have one additional node which will run HAProxy. Note that the HAProxy node does not have to be a Spectrum Scale/GPFS node and can be any external node.

haproxynodes

HAProxy with SSL: In order to run HAProxy over SSL, we will need either a self-signed or CA-signed SSL certificate along with its respective private key in the proper PEM format. The web has many tutorials on generating self-signed SSL certificates, including this excellent tutorial which we referenced:

digitalocean.com haproxy tutorial
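As a quick sketch of what that looks like (assuming OpenSSL is available; the file paths and the CN value below are placeholders, not from the original post), a self-signed certificate and key can be generated and concatenated into the single PEM file that HAProxy's 'ssl crt' option expects:

```shell
# Generate a self-signed certificate and key (paths and CN are examples):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=haproxynode" \
    -keyout /tmp/haproxynode.key -out /tmp/haproxynode.crt
# HAProxy expects the certificate followed by its private key in one PEM file:
cat /tmp/haproxynode.crt /tmp/haproxynode.key > /tmp/haproxynode.pem
# Quick sanity check that the combined file parses as a certificate:
openssl x509 -in /tmp/haproxynode.pem -noout -subject
```

For production use, a CA-signed certificate should be substituted; the concatenation step (certificate first, then key) stays the same.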


Installing HAProxy

In order to configure HAProxy with SSL, we need to download HAProxy v1.5.x or greater. You can download the source files directly from http://www.haproxy.org

For complete instructions on installing/configuring HAProxy, please visit: http://www.haproxy.org/#docs

or download the rpms:

x86 rpm:
http://rpmfind.net/linux/rpm2html/search.php?query=haproxy%28x86-64%29

ppc64 environments:
http://rpmfind.net/linux/rpm2html/search.php?query=haproxy%28ppc-64%29

As mentioned, for our setup, HAProxy was installed on an external (non-GPFS) node since Spectrum Scale/GPFS is not required to be running on the HAProxy node. In our case, we installed HAProxy on c6f2bc4n3.


Configuring HAProxy

Now that HAProxy has been installed on one of our available nodes, we are ready to configure it. The configuration process is fairly straightforward since most of the default settings can remain the same.

Our HAProxy node, c6f2bc4n3, has both a physical IP address (192.168.105.145) and a virtual IP address (192.168.6.100) on which HAProxy will listen for requests. The virtual IP address is also the same ces IP address that is configured for the keystone and swift endpoints:

haproxy_ifconfig

haproxy_endpoints

  • HAProxy Logging: This is optional but recommended, especially when you are troubleshooting problems with HAProxy and you want to confirm whether the requests are coming in and being handled by the expected HAProxy backend.

Add these lines to /etc/rsyslog.conf:

$ModLoad imudp
$UDPServerRun 514
$UDPServerAddress 127.0.0.1

local5.*                                -/var/log/haproxy.log

Restart the rsyslog service: service rsyslog restart

Finally, open /etc/haproxy/haproxy.cfg and add this line under the global section:

log         127.0.0.1 local5

  • HAProxy Default Settings: In our setup, we added a few additional parameters to the defaults section within /etc/haproxy/haproxy.cfg. This is what our configuration file looked like with respect to the defaults section:

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

  • HAProxy Frontend: For our configuration, we configured four front ends:

One frontend that will handle incoming keystone http requests:

frontend keystone-http
   bind 192.168.6.100:35357
   reqadd X-Forwarded-Proto:\ http
   default_backend keystone-backend

The frontend name can be any name you choose. The bind address is the virtual IP address on which HAProxy will listen for incoming requests. The reqadd line adds an X-Forwarded-Proto header to the HTTP request. Finally, the request will be handled by our specified backend, 'keystone-backend', which will be defined in a later example.

One frontend that will handle incoming keystone https requests:

frontend keystone-https
   bind 192.168.6.100:5000 ssl crt /etc/haproxy/haproxynode.pem
   reqadd X-Forwarded-Proto:\ https
   default_backend keystone-backend

You will notice the extra string 'ssl crt /etc/haproxy/haproxynode.pem', which specifies the path to the SSL certificate that will be used for SSL. Also, this time the X-Forwarded-Proto header is set to https. Our backend remains the same.

One frontend that will handle incoming Swift http requests:

frontend swift-http
   bind 192.168.6.100:8080
   reqadd X-Forwarded-Proto:\ http
   default_backend swift-backend

We have a separate backend to handle swift requests since swift listens for requests on port 8080.

Finally one frontend that will handle incoming Swift https requests:

frontend swift-https
   bind 192.168.6.100:8081 ssl crt /etc/haproxy/haproxynode.pem
   reqadd X-Forwarded-Proto:\ https
   default_backend swift-backend

  • HAProxy Backend: Two backends were configured to process the requests that are forwarded by the defined frontends:

One backend that will handle the forwarded keystone http/https requests:

backend keystone-backend
   balance roundrobin
   server k1 192.168.11.5:35357 check inter 5s
   server k2 192.168.11.6:35357 check inter 5s
   server k3 192.168.11.7:35357 check inter 5s
   server k4 192.168.11.8:35357 check inter 5s

We only need one backend to handle both http and https keystone requests. We will use the roundrobin algorithm to decide which backend server will process each incoming request. In our example above, we have 4 different keystone backend servers/ces ip addresses that will handle our keystone requests, identified as k1-k4. The 'check inter 5s' means that we will check these servers every 5 seconds to ensure they are responsive.
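To make the roundrobin behavior concrete, here is a toy bash illustration (no HAProxy involved) of how six successive requests would be spread across the four servers k1-k4:

```shell
# Simulate roundrobin distribution across four backend servers
servers=(k1 k2 k3 k4)
for req in 1 2 3 4 5 6; do
    idx=$(( (req - 1) % ${#servers[@]} ))
    echo "request $req -> ${servers[$idx]}"
done
# request 5 wraps back around to k1, request 6 goes to k2, and so on
```

In practice HAProxy also honors per-server weights and skips servers whose health check ('check inter 5s') is failing, so the real distribution only looks this regular when all four servers are up.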

One backend that will handle the forwarded Swift http/https requests:

backend swift-backend
   balance roundrobin
   server s1 192.168.11.5:8080 check inter 5s
   server s2 192.168.11.6:8080 check inter 5s
   server s3 192.168.11.7:8080 check inter 5s
   server s4 192.168.11.8:8080 check inter 5s

Once again, we have 4 swift backend servers/ces ip addresses that will handle our swift requests, identified as s1-s4.

Once all of the HAProxy settings are in place, we need to restart the HAProxy service in order for these settings to take effect:

service haproxy restart


Demonstrate Swift/Keystone and HAProxy in action

Now that we have HAProxy configured, we are ready to test swift and keystone. As mentioned previously, we assume that swift and keystone have already been configured. If not, please refer to the Configuring Object Using IBM Spectrum Scale blog post for more information.

The first thing we will do is copy the HAProxy SSL certificate (haproxynode.pem) that we generated on the haproxy node in the earlier steps to one of our protocol nodes. For this example, we will run our tests from node c40bbc2xn3. For the first test, we will try to access the keystone and swift http haproxy backends. To do this, we first need to load our standard openrc file, which by default is located under /root:

c40bbc2xn3:~ # source openrc

The standard out of the box openrc file will look something like this:

haproxy_openrc
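Since the screenshot above is not reproduced here, the sketch below shows the general shape of such a file. The variable names are the standard OpenStack client settings, but the host, port, and credentials are assumptions to be adapted to your environment:

```shell
# Hypothetical openrc contents (adjust URL, user, and project to your cluster)
cat > /tmp/openrc <<'EOF'
export OS_AUTH_URL=http://cesobjnode:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME=admin
export OS_PASSWORD=
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_PROJECT_DOMAIN_NAME=Default
EOF
source /tmp/openrc
echo "Auth endpoint: $OS_AUTH_URL"
# prints: Auth endpoint: http://cesobjnode:35357/v3
```

Note that OS_PASSWORD is intentionally left blank here, matching the out-of-the-box behavior described below.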

As mentioned previously, if you don’t want to be prompted to provide the password each time you execute an openstack or swift command, then you can explicitly specify the password in the openrc file.

Let’s list the current users:

openstack user list

| mfsct-iou50   | mfsct-iou50   |
| ksadmin       | ksadmin       |
| ari           | ari           |
| Bill          | bill          |
| swift         | swift         |
| aduser1       | aduser1       |
+---------------+---------------+

We should see a corresponding entry in the /var/log/haproxy.log on our haproxy server node that looks similar to this:

2015-08-11T22:22:54-04:00 localhost haproxy[7027]: 192.168.40.73:39897 [11/Aug/2015:22:22:54.695] keystone-http keystone-backend/k3 0/0/0/4/4 200 465 - - ---- 1/1/0/1/0 0/0 "GET /v3 HTTP/1.1"
2015-08-11T22:22:55-04:00 localhost haproxy[7027]: 192.168.40.73:39897 [11/Aug/2015:22:22:54.699] keystone-http keystone-backend/k4 2/0/1/414/417 200 8177 - - ---- 1/1/0/1/0 0/0 "GET /v3/users HTTP/1.1"

Looking at the haproxy log entries, we will notice a few things. First, we see the node that submitted the request, which in this example was our protocol node with physical ip address 192.168.40.73 (source port 39897).

The request was initially handled by the keystone-http frontend and then processed by the keystone-backend. Server k3, which is mapped to ces ip 192.168.11.7:35357, was selected in round robin fashion as the server to handle the request.

Finally, "GET /v3/users HTTP/1.1" indicates that a GET request to list the current users via HTTP was processed.
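When scanning a busy haproxy.log, the fields of interest can be pulled out mechanically. A small awk sketch, using one of the log lines above (retyped with plain ASCII dashes):

```shell
# Extract frontend, backend/server, and HTTP status from an HAProxy log line.
# Fields (whitespace-separated): 6=frontend, 7=backend/server, 9=status code.
line='2015-08-11T22:22:54-04:00 localhost haproxy[7027]: 192.168.40.73:39897 [11/Aug/2015:22:22:54.695] keystone-http keystone-backend/k3 0/0/0/4/4 200 465 - - ---- 1/1/0/1/0 0/0 "GET /v3 HTTP/1.1"'
echo "$line" | awk '{print "frontend=" $6, "server=" $7, "status=" $9}'
# prints: frontend=keystone-http server=keystone-backend/k3 status=200
```

Running the same one-liner over the whole log (awk '...' /var/log/haproxy.log) gives a quick view of which backend servers are actually receiving traffic.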

Let’s now show what a simple swift command will look like:

swift stat

Account: AUTH_cd3ce00824f44f63ae9ec532b5d964fa
     Containers: 0
        Objects: 0
          Bytes: 0
X-Put-Timestamp: 1439415671.02249
    X-Timestamp: 1439415671.02249
     X-Trans-Id: tx1027924823ff4884bf17c-0055cbbd75
   Content-Type: text/plain; charset=utf-8

/var/log/haproxy.log from proxy node:

2015-08-11T22:24:37-04:00 localhost haproxy[7027]: 192.168.40.73:54720 [11/Aug/2015:22:24:36.184] swift-http swift-backend/s2 0/0/0/1491/1491 204 339 - - ---- 1/1/0/1/0 0/0 "HEAD /v1/AUTH_cd3ce00824f44f63ae9ec532b5d964fa HTTP/1.1"

Similar to the openstack command, we see that the same protocol node submitted the swift command. We also see that the swift-http frontend initially processed the request and then forwarded it to the swift-backend. The s2 server actually handled the request.

Let’s now show how we can connect to our haproxy server using ssl. The easiest way to do this is to modify our existing openrc file or create a new openrc file. The modified openrc file will look similar to this:

haproxyssl_openrc

The OS_AUTH_URL value above specifies that we will connect to our proxynode using https over port 5000. This is the same port we specified in our haproxy.cfg for the keystone https frontend.

The OS_CACERT value specifies the location of our haproxy cert file that we copied over from our haproxy node.
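In other words, only two settings change relative to the plain-HTTP openrc. The hostname and certificate path below are assumptions based on this setup, since the original screenshot is not reproduced here:

```shell
# Point the clients at the HTTPS frontend and trust the HAProxy certificate
export OS_AUTH_URL=https://haproxynode:5000/v3
export OS_CACERT=/root/haproxynode.pem
echo "$OS_AUTH_URL"
# prints: https://haproxynode:5000/v3
```

Everything else (username, project, domains) carries over unchanged from the original openrc file.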

Assuming we modified the existing openrc file, we need to reload the file:

c40bbc2xn3:~ # source openrc

Now we will run the following openstack command:

openstack project list
+———————————-+———+
| ID                               | Name    |
+———————————-+———+
| cd3ce00824f44f63ae9ec532b5d964fa | admin   |
| 41dfc8a0cbac414685c53c5543fa71fd | service |
+———————————-+———+

We should expect to see something similar to this in the haproxy log:

haproxy_ssl_log

Notice how this time the request was handled by the keystone-https frontend and then forwarded to the keystone-backend, which is working as expected.

Finally, let's look at our swift command:

swift --os-storage-url https://haproxynode:8081/v1/AUTH_cd3ce00824f44f63ae9ec532b5d964fa stat

swift_ssl

haproxy_ssl_swift_log

You should notice one request to authenticate against keystone and another request to process the swift command.

Configuring Object Using IBM Spectrum Scale


Intro:

This is a quick start guide for configuring object using IBM Spectrum Scale. We will focus on configuring object with AD and LDAP. This guide is not intended as a substitute for the official Spectrum Scale installation guide but it should highlight some of the main steps involved in configuring Object access using an existing Spectrum Scale/GPFS cluster.

Agenda:

  • IBM Spectrum Scale Resources
  • Test Environment / Prerequisites
  • Configuring Object using the Spectrum Scale Installer
  • Validate Object Authentication Configuration
  • Demonstrating swift and openstack basic commands
  • Configuring Object Authentication using the command line

IBM Spectrum Scale Resources:


Test Environment:

For this tutorial, we will be using an existing Spectrum Scale/GPFS cluster running version 4.1.1. The test cluster consists of 3 nsd server nodes (nodes 1-3) and 2 ces nodes (nodes 17, 18) which will export our object data:

mmlscluster

We also have two existing file systems: cesfs, which will store the necessary configuration data, and testfs, which will store our actual object data:

fs

Finally, we have allocated a few ces IP addresses which will be used to host the various object services, such as the postgres and keystone services:

ces_ips


Configuring Object Using the Spectrum Scale Installer:

For the actual installation of object, we will be using the Spectrum Scale Installer, which you can download through IBM. Once you have downloaded the installer, you need to copy it to one of your cluster nodes, which will serve as the installation node. This node should have passwordless ssh access to the other nodes in the cluster since it will run the chef server that is required by the installer.

  1. Extract necessary installation files and configure yum repos by executing the installer script: extract
  2. Setup the installation node: step2_installnode
  3. Add the first ces node and its respective export IP addresses: step3_addcesnode
  4. Add the second ces node and its respective export IP address: step4_addcesnode
  5. Designate our first ces node as the admin node: step5_addadminnode
  6. Set file system and mount point: step6_setfs
  7. Enable Object protocol: step7_enableobj
  8. Set Keystone endpoint hostname: step8_setendpoint
  9. Set the file system, mount point and object base fileset to be used (note: the fileset must not already exist as this step will create it): step9_setobjbase
  10. Configure Object authentication: This step is only necessary if you want to configure Object with AD or LDAP authentication using the installer. If you want to configure object with local database authentication, then this step is not necessary since, by default, object is configured with local database authentication (authentication information is stored in a postgres database). There are 4 authentication options to select from: (ldap|ad|local|external). For the purposes of this tutorial, we will only cover AD and LDAP, which require essentially the same steps to configure. Note that you can configure either AD or LDAP for object authentication, but not both.

Object with AD: 'spectrumscale auth object ad'. After executing this command, you will be prompted to open the authentication configuration file, which will allow you to configure the necessary AD-related configuration settings: step10_objwithad step10_objwithad_config

Object with AD + TLS: If you would like to configure object with AD + TLS, then you need to specify a couple of additional parameters inside the authentication configuration file. A copy of the TLS certificate must be copied locally to the install node. In this example, the TLS certificate has been copied to /var/mmfs/tmp/object_ldap_cacert.pem, which is where it would need to be copied if you were to configure object authentication using the command line: step10_objwithadTLS_config

Save changes: step10_objwithadsave_config

Object with LDAP / LDAP + TLS: 'spectrumscale auth object ldap'. After executing this command, you will be prompted to open the authentication configuration file, which will allow you to configure the necessary LDAP-related configuration settings. The authentication configuration file will appear exactly the same as the configuration file shown previously for AD configuration.

  11. Confirm the current installation configuration: step10_nodelist
  12. Run Installation Precheck: step11_precheck
  13. Run Installation: step13_deploy

Post Installation: As an optional step, you can run an installation post check to ensure the installation went as planned: 'spectrumscale deploy --postcheck'. Ideally, the installation should succeed with some occasional warnings. However, in some cases, the installation may fail due to incorrect configuration settings or environmental issues. In the event of an installation failure, consult the log file for more details. Once the underlying problems have been addressed, you should be able to rerun the installation.


Object Authentication Validation:

Assuming the installation succeeded, we are now ready to validate our object configuration.

Let's start by checking the cluster configuration settings, which should confirm whether object was properly configured during the installation. You should expect to see that OBJ is enabled and that the CES IP addresses have been assigned to the respective CES nodes: 'mmlscluster --ces'

clustercesinfo

Next, let's run 'mmuserauth service list --data-access-method object' to list the existing authentication configuration. The authentication type you configured in the installation step should be listed: authlistlocal

Finally, let's run 'mmces service list --all' to ensure that Object is running on our two ces nodes: servicelist


Validating Object Authentication Using swift and openstack clients:

Now that we have confirmed object was installed and configured properly, we can move on to the fun stuff. By default, both the swift and openstack clients are installed on the ces nodes during the installation process. In general, the swift client, as the name suggests, is used to interact with the swift service to perform operations such as adding, removing, and reading objects from the swift repository. The openstack client is mainly used to interact with the keystone service, whose primary function is to provide identity and authentication capability for the ces cluster.

openrc file: Before we begin to explore some of the swift and openstack commands, we need to familiarize ourselves with the openrc file that was created during the installation. The openrc file contains all of the relevant swift/keystone environment variables that will be used to interact with both swift and keystone. You load the file the same way you would load your typical environment profile, such as .profile. If you don't load the environment variables contained in the openrc file, then you will need to explicitly specify each of these parameters on the command line along with their respective values. A typical openrc file would look something like this: openrc

One thing to notice is that the OS_PASSWORD parameter is blank by default, and so you will be prompted to enter a password whenever you execute a swift or openstack command. This is done for security reasons. If you don't want to be prompted each time to enter a password and you understand the security risks, you can specify the password in the openrc file and then reload it.

For our example, we have Object configured with AD as the authentication backend and so whenever we execute a swift or openstack command, the swift or openstack user will authenticate against AD in order for the request to be processed.

Openstack:

One thing to note is that the keystone AD/LDAP interface is read only and so you cannot create new AD/LDAP users via openstack/keystone.

Let’s start by listing the current keystone endpoints: ‘openstack endpoint list --os-password password’

Next, let’s list the current projects. By default, two projects are created during setup: admin and service: ‘openstack project list --os-password password’

Let’s list the current users in AD: ‘openstack user list --os-password password’


Show the currently defined roles: ‘openstack role list --os-password password’


Now the fun begins. Let’s create a new role called ‘test-role’: ‘openstack role create test-role --os-password password’

Show basic information about the new role using the show command. This time we won’t specify the --os-password parameter, so we should instead be prompted for the password: ‘openstack role show test-role’


Let’s now create a new project called “test-project”: ‘openstack project create --description "my test project" test-project --os-password password’


Finally, let’s assign our newly created ‘test-role’ to one of our existing AD users and then confirm that the role is assigned to our test user. Notice that we need to specify a project ID, so in this case, we will use the test-project ID: ‘openstack role add --user aduser1 --project 1c1aeace6e9745568f462b889933e8a3 test-role’


Swift:

Let’s begin by introducing the ‘stat’ command, which displays basic information about swift. Initially, there will be no containers nor any objects stored in swift: ‘swift stat --os-password password’


Let’s create a new container called “test-container”: ‘swift post "test-container" --os-password password’


Let’s now upload two new objects into our “test-container”. In this example, the objects were stored locally on our CES node: ‘swift upload "test-container" object1 object2 --os-password password’


Finally, let’s download ‘object1’ from our “test-container”: ‘swift download "test-container" object1 --os-password password’




Configuring Object Authentication Via the Command Line:

In addition to configuring Object authentication using the install toolkit, you can also configure Object authentication using the ‘mmuserauth service create’ command directly from the command line. For more detailed information about the mmuserauth command, please reference the IBM Spectrum Scale Advanced Administration Guide. There are different valid scenarios that may warrant the need to configure Object authentication using the command line:

  • Object was installed/configured manually (not using the install toolkit) and so authentication needs to be configured manually.
  • Object was installed but Object authentication was not configured using the install toolkit
  • Object was installed and configured using the install toolkit but requirements have since changed or a mistake was made during the configuration process

There may be additional valid scenarios, but these three should be sufficient for now. The first scenario will be handled separately because it requires a couple of additional prerequisite steps before we are able to configure Object authentication from the command line. The last two scenarios require the same steps, so we will handle them together.

  • Scenario 1 (Object was not installed/configured using the install toolkit)

Assuming that Object was not originally installed/configured using the install toolkit, we need to run one additional prerequisite command (mmcesobjcrbase) before we can configure Object authentication using AD or LDAP. We still assume, however, that the required Object RPMs have been installed. Once the prerequisite step has been completed, we need to manually enable Object. After that, we will jump down to the steps outlined for Scenarios 2 and 3 to configure Object with AD or LDAP.

The mmcesobjcrbase command will run all the necessary steps that will be required to properly configure Object. It will essentially execute the same steps that are executed when Object is enabled using the install toolkit. Once the command successfully completes, Object with Local (database) authentication will be configured. mmcesobjcrbase will fail if Object is already enabled or if Object Authentication is already configured. Object authentication must be removed and the Object service must be disabled prior to running the command.

At a minimum, we need to specify values for the following parameters:

-g GPFSMountPoint (this is where the object fileset that stores the object configuration files will be created)

--cluster-hostname (this is the same as --ks-dns-name, which is the keystone endpoint name)

--local-keystone | --remote-keystone-url (if the keystone server is running on the local node, specify --local-keystone; otherwise, specify --remote-keystone-url along with the full URL path to the remote keystone server)

--admin-password (password for the keystone admin user)

--admin-user (defaults to ‘admin’ if not explicitly specified)
--swift-user (defaults to ‘swift’ if not explicitly specified)
-O (name of the object fileset to be created; defaults to ‘object_fileset’)
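For orientation, the --cluster-hostname value ends up as the host portion of the keystone endpoint URLs. A small sketch using the standard keystone ports (35357 admin, 5000 public) and a hypothetical hostname:

```shell
# Hypothetical: derive the keystone endpoint URLs from the cluster hostname.
KS_HOST=cesobjnode
ADMIN_URL="http://${KS_HOST}:35357/v3"    # admin endpoint (matches OS_AUTH_URL in openrc)
PUBLIC_URL="http://${KS_HOST}:5000/v3"    # public endpoint
echo "$ADMIN_URL"
echo "$PUBLIC_URL"
```

This is why the hostname passed here must resolve (via DNS or /etc/hosts) from every node that will talk to keystone.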

mmcesobjcrbase -g /gpfs/cesfs --cluster-hostname cesobjnode --local-keystone --admin-user admin --admin-password Passw0rd

mmcesobjcrbase: Validating execution environment.
mmcesobjcrbase: Performing SELinux configuration.
mmcesobjcrbase: Creating fileset /dev/cesfs object_fileset.
mmcesobjcrbase: Configuring Keystone server in /gpfs/cesfs/ces/object/keystone.
mmcesobjcrbase: Creating postgres database.
mmcesobjcrbase: Setting cluster attribute object_database_node=192.168.11.5.
2015-08-10 17:05:06.787 12181 WARNING keystone.cli [-] keystone-manage pki_setup is not recommended for production use.
mmcesobjcrbase: Validating Keystone environment.
mmcesobjcrbase: Validating Swift values in Keystone.
mmcesobjcrbase: Configuring Swift services.
mmcesobjcrbase: Setting cluster attribute object_singleton_node=192.168.11.7.
mmcesobjcrbase: Uploading configuration changes to the CCR.
mmcesobjcrbase: Configuration complete.

 Now we need to enable the OBJ service:

mmces service enable OBJ
c40bbc3xn7.gpfs.net:  object: service is disabled
c40bbc2xn3.gpfs.net:  object: service is disabled
mmchconfig: Command successfully completed
mmchconfig: Propagating the cluster configuration data to all
  affected nodes.  This is an asynchronous process.

Object should now be enabled and configured with local (database) authentication:

mmuserauth service list --data-access-method object
OBJECT access configuration : LOCAL
PARAMETERS               VALUES
-------------------------------------------------
ENABLE_KS_SSL            false
ENABLE_KS_CASIGNING      false
KS_ADMIN_USER            admin

So at this point, Object has been successfully configured. If we need to reconfigure Object with AD or LDAP authentication, then we can follow the steps outlined below:

  • Scenario 2 and 3 (Object was installed/configured using the install toolkit):

When using the install toolkit, if you enable Object and do not specify any other authentication method, Object is configured with local (database) authentication by default. With local authentication, you need to manually add and set object users and passwords in the local postgres database using openstack commands. If you later decide that you want to integrate Object with your existing AD or LDAP environment, you will need to reconfigure Object authentication:

Let’s start by listing the current Object authentication configuration using the mmuserauth service list command:

mmuserauth service list --data-access-method object
OBJECT access configuration : LOCAL
PARAMETERS               VALUES
-------------------------------------------------
ENABLE_KS_SSL            false
ENABLE_KS_CASIGNING      false
KS_ADMIN_USER            none

Next, we will now remove the existing Object authentication configuration using the mmuserauth service remove command:

mmuserauth service remove --data-access-method object
mmuserauth service remove: Command successfully completed

We need to run the command one more time to remove the ID Mapping:

mmuserauth service remove --data-access-method object --idmapdelete
mmuserauth service remove: Command successfully completed

We can confirm Object with Local authentication was successfully removed:

mmuserauth service list --data-access-method object
OBJECT access not configured
PARAMETERS               VALUES
-------------------------------------------------


Configuring Object authentication with Keystone HTTPS (SSL):

Before we can configure Object with Keystone HTTPS, we need either self-signed or CA-signed SSL certificates. For the purposes of this example, we will create self-signed SSL certificates using the keystone-manage utility.

We will begin by adding/modifying the following section within the keystone.conf file (/etc/keystone/keystone.conf):

[eventlet_server_ssl]
enable = false (this will be toggled to true once the mmuserauth command is executed)
certfile = /etc/keystone/ssl/certs/ssl_cert.pem
keyfile = /etc/keystone/ssl/private/ssl_key.pem
ca_certs = /etc/keystone/ssl/certs/ssl_cacert.pem
cert_subject = /C=US/ST=Unset/L=Unset/O=Unset/CN=cesobjnodetest (this name has to match the endpoint name you specify)

Save and exit

Run keystone-manage ssl_setup --keystone-user keystone --keystone-group keystone --rebuild

You will get warnings which you can ignore for now. You should now have these keys in this location:

/etc/keystone/ssl/certs # ls -l
-rw-r-----. 1 keystone keystone 1277 Aug 10 17:05 signing_cacert.pem
-rw-------. 1 keystone keystone 4251 Aug 10 17:05 signing_cert.pem
-rw-r-----. 1 keystone keystone  920 Aug 31 10:38 ssl_cacert.pem
-rw-r-----. 1 keystone keystone 2864 Aug 31 10:38 ssl_cert.pem

/etc/keystone/ssl/private # ls -l
-rw-r-----. 1 keystone keystone  887 Aug 31 10:38 cakey.pem
-rw-------. 1 keystone keystone 1675 Aug 10 17:05 signing_key.pem
-rw-r-----. 1 keystone keystone  887 Aug 31 10:38 ssl_key.pem

Let’s verify the SSL certificate to ensure the name is valid:

/etc/keystone/ssl/certs # openssl verify ssl_cacert.pem
ssl_cacert.pem: C = US, ST = Unset, L = Unset, O = Unset, CN = cesobjnodetest
error 18 at 0 depth lookup:self signed certificate
OK
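The same self-signed flow can be reproduced with plain openssl outside of keystone-manage. The sketch below generates a throwaway key/cert pair using the CN from this example and verifies the certificate against itself; the paths and subject are illustrative only:

```shell
# Generate a throwaway self-signed certificate with the example CN.
WORK="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$WORK/ssl_key.pem" -out "$WORK/ssl_cert.pem" \
  -subj "/C=US/ST=Unset/L=Unset/O=Unset/CN=cesobjnodetest" 2>/dev/null

# Confirm the CN matches the endpoint name we intend to use.
SUBJ="$(openssl x509 -noout -subject -in "$WORK/ssl_cert.pem")"
echo "$SUBJ"

# A self-signed cert only verifies cleanly when supplied as its own CA.
VERIFY="$(openssl verify -CAfile "$WORK/ssl_cert.pem" "$WORK/ssl_cert.pem")"
echo "$VERIFY"
rm -rf "$WORK"
```

Without the -CAfile argument, openssl verify reports the same "self signed certificate" condition shown in the transcript above, which is expected for certificates that are not anchored in the system trust store.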

Now copy these files to the /var/mmfs/tmp directory:

#10:46:00# c40bbc2xn3:/var/mmfs/tmp # ls | grep ssl
ssl_cacert.pem
ssl_cert.pem
ssl_key.pem

We are now ready to configure Object authentication with Keystone HTTPS (SSL). We can configure Object with local (database) authentication, or with AD or LDAP. For this example, we will configure Object with local authentication, but the same steps can be followed for AD or LDAP:

mmuserauth service create --data-access-method object --type local --ks-dns-name cesobjnode --enable-ks-ssl --ks-admin-user admin --ks-admin-pwd Password --ks-swift-user swift --ks-swift-pwd Password

mmcesobjcrbase: Validating execution environment.
mmcesobjcrbase: Configuring Keystone server in /gpfs/cesfs/ces/object/keystone.
mmcesobjcrbase: Initiating action (start) on postgres in the cluster.
2015-09-08 19:03:15.772 38404 WARNING keystone.cli [-] keystone-manage pki_setup is not recommended for production use.
mmcesobjcrbase: Validating Keystone environment.
mmcesobjcrbase: Validating Swift values in Keystone.
mmcesobjcrbase: Configuration complete.
Object configuration with local database as the identity backend has completed successfully.
Object authentication configuration completed successfully.

Let’s verify the configuration by running mmuserauth service list:

FILE access not configured
PARAMETERS               VALUES
-------------------------------------------------

OBJECT access configuration : LOCAL
PARAMETERS               VALUES
-------------------------------------------------
ENABLE_KS_SSL            true
ENABLE_KS_CASIGNING      false
KS_ADMIN_USER            admin

We should notice that ENABLE_KS_SSL is now set to true.

Finally, our openrc file should have a new variable named OS_CACERT, which points to our SSL certificate:

export OS_CACERT="/etc/keystone/ssl/certs/ssl_cacert.pem"
export OS_AUTH_URL="https://cesobjnode:35357/v3"
export OS_IDENTITY_API_VERSION=3
export OS_AUTH_VERSION=3
export OS_USERNAME="admin"
export OS_PASSWORD="Password"
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_PROJECT_DOMAIN_NAME=Default
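Since every client call will now read OS_CACERT, it is worth failing fast if the path is wrong. A small hedged pre-flight check; the temp file here stands in for the real certificate path:

```shell
# Hypothetical pre-flight check: make sure OS_CACERT points at a readable file.
OS_CACERT="$(mktemp)"    # stand-in for /etc/keystone/ssl/certs/ssl_cacert.pem
export OS_CACERT
if [ -r "$OS_CACERT" ]; then
  CACERT_STATUS=ok
else
  CACERT_STATUS=missing
fi
echo "OS_CACERT check: $CACERT_STATUS"
rm -f "$OS_CACERT"
```

If the check reports missing, the swift and openstack clients will fail TLS verification against the keystone endpoint rather than giving a clear file-not-found error.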

After we load our new openrc file, we should be able to execute openstack and swift commands as expected.


Configuring Object with LDAP authentication:

If Object authentication was previously configured, then we must first remove the existing configuration before we can reconfigure authentication to use LDAP:

mmuserauth service remove --data-access-method object
mmuserauth service remove: Command successfully completed

mmuserauth service remove --data-access-method object --idmapdelete
mmuserauth service remove: Command successfully completed

We are now ready to configure Object Authentication with LDAP:

mmuserauth service create --type ldap --data-access-method object --user-name "cn=manager,dc=essldapdomain" --password "Passw0rd" --base-dn dc=isst,dc=aus,dc=stglabs,dc=ibm,dc=com --ks-dns-name 192.168.6.99 --ks-admin-user mamdouh --servers 9.3.101.55 --user-dn "ou=People,dc=essldapdomain" --ks-swift-user swift --ks-swift-pwd Passw0rd

Note: Both the --ks-admin-user and the --ks-swift-user specified in the command must already exist in LDAP.

mmcesobjcrbase: Validating execution environment.
mmcesobjcrbase: Performing SELinux configuration.
mmcesobjcrbase: Configuring Keystone server in /gpfs/cesfs/ces/object/keystone.
mmcesobjcrbase: Initiating action (start) on postgres in the cluster.
mmcesobjcrbase: Validating Keystone environment.
mmcesobjcrbase: Validating Swift values in Keystone.
mmcesobjcrbase: Configuration complete.
Object configuration with LDAP as the identity backend has completed successfully.
Object authentication configuration completed successfully.

We can now verify that Object with LDAP authentication was properly configured using the mmuserauth service list command:

mmuserauth service list --data-access-method object
OBJECT access configuration : LDAP
PARAMETERS               VALUES
-------------------------------------------------
ENABLE_ANONYMOUS_BIND    false
ENABLE_SERVER_TLS        false
ENABLE_KS_SSL            false
USER_NAME                cn=manager,dc=essldapdomain
SERVERS                  9.3.101.55
BASE_DN                  dc=isst,dc=aus,dc=stglabs,dc=ibm,dc=com
USER_DN                  ou=people,dc=essldapdomain
USER_OBJECTCLASS         posixAccount
USER_NAME_ATTRIB         cn
USER_ID_ATTRIB           uid
USER_MAIL_ATTRIB         mail
USER_FILTER              none
ENABLE_KS_CASIGNING      false
KS_ADMIN_USER            mamdouh

Refer to the ‘Validating Object Authentication Using swift and openstack clients’ section in order to validate whether Object authentication is working as expected.


Object with AD Authentication:

Assuming that we configured Object with LDAP in the previous step, we must remove the existing Object authentication configuration, including the ID Mapping, before we can configure Object with AD.

Once again, let’s first remove the existing Object authentication configuration using the mmuserauth service remove command:

mmuserauth service remove --data-access-method object
mmuserauth service remove: Command successfully completed

Next, we will remove the existing ID Mapping:

mmuserauth service remove --data-access-method object --idmapdelete
mmuserauth service remove: Command successfully completed

Finally, let’s validate the removal using the mmuserauth service list command:

mmuserauth service list --data-access-method object
OBJECT access not configured
PARAMETERS               VALUES
-------------------------------------------------

We’re now ready to configure Object with AD:

mmuserauth service create --type ad --data-access-method object --user-name "cn=Administrator,cn=Users,dc=adcons,dc=spectrum" --password "Passw0rd3" --base-dn "dc=adcons,dc=spectrum" --ks-dns-name 192.168.6.99 --ks-admin-user Administrator --ks-swift-user swift --ks-swift-pwd Passw0rd2 --servers 9.18.76.50 --user-id-attrib cn --user-name-attrib sAMAccountName --user-objectclass organizationalPerson --user-dn "cn=Users,dc=adcons,dc=spectrum"

Note: Both the --ks-admin-user and the --ks-swift-user specified in the command must already exist in AD.

mmcesobjcrbase: Validating execution environment.
mmcesobjcrbase: Performing SELinux configuration.
mmcesobjcrbase: Configuring Keystone server in /gpfs/cesfs/ces/object/keystone.
mmcesobjcrbase: Initiating action (start) on postgres in the cluster.
mmcesobjcrbase: Validating Keystone environment.
mmcesobjcrbase: Validating Swift values in Keystone.
mmcesobjcrbase: Configuration complete.
Object configuration with LDAP (Active Directory) as the identity backend has completed successfully.
Object authentication configuration completed successfully.

Let’s verify that Object with AD as the authentication backend has been configured as expected:

mmuserauth service list --data-access-method object
OBJECT access configuration : AD
PARAMETERS               VALUES
-------------------------------------------------
ENABLE_ANONYMOUS_BIND    false
ENABLE_SERVER_TLS        false
ENABLE_KS_SSL            false
USER_NAME                cn=Administrator,cn=Users,dc=adcons,dc=spectrum
SERVERS                  9.18.76.50
BASE_DN                  dc=adcons,dc=spectrum
USER_DN                  cn=users,dc=adcons,dc=spectrum
USER_OBJECTCLASS         organizationalPerson
USER_NAME_ATTRIB         sAMAccountName
USER_ID_ATTRIB           cn
USER_MAIL_ATTRIB         mail
USER_FILTER              none
ENABLE_KS_CASIGNING      false
KS_ADMIN_USER            Administrator


Configuring Object with AD-TLS (the exact same steps can be followed to configure LDAP-TLS):

In order to configure Object with AD-TLS, we need to copy the TLS certificate to the local CES node. The TLS certificate must be named object_ldap_cacert.pem and copied to /var/mmfs/tmp.

If Object authentication was previously configured, we need to remove the existing authentication first before we can proceed:

mmuserauth service remove --data-access-method object
mmuserauth service remove: Command successfully completed

Next, we will remove the existing ID Mapping:

mmuserauth service remove --data-access-method object --idmapdelete
mmuserauth service remove: Command successfully completed

Finally, let’s validate the removal using the mmuserauth service list command:

mmuserauth service list --data-access-method object
OBJECT access not configured
PARAMETERS               VALUES
-------------------------------------------------

We’re now ready to configure Object with AD-TLS:

Note: --enable-server-tls must be specified in order to enable server TLS.

mmuserauth service create --type ad --data-access-method object --user-name "cn=Administrator,cn=Users,dc=adcons,dc=spectrum" --password "Passw0rd3" --base-dn "dc=adcons,dc=spectrum" --ks-dns-name cesobjnode --ks-admin-user Administrator --ks-swift-user swift --ks-swift-pwd Passw0rd2 --servers AD-CONS.adcons.spectrum --user-id-attrib cn --user-name-attrib sAMAccountName --user-objectclass organizationalPerson --user-dn "cn=Users,dc=adcons,dc=spectrum" --enable-server-tls

mmcesobjcrbase: Validating execution environment.
mmcesobjcrbase: Performing SELinux configuration.
mmcesobjcrbase: Configuring Keystone server in /gpfs/cesfs/ces/object/keystone.
mmcesobjcrbase: Initiating action (start) on postgres in the cluster.
mmcesobjcrbase: Validating Keystone environment.
mmcesobjcrbase: Validating Swift values in Keystone.
mmcesobjcrbase: Configuration complete.
Object configuration with LDAP (Active Directory) as the identity backend has completed successfully.
Object authentication configuration completed successfully.

Let’s verify that Object with AD-TLS as the authentication backend has been configured as expected:

mmuserauth service list --data-access-method object

OBJECT access configuration : AD
PARAMETERS               VALUES
-------------------------------------------------
ENABLE_ANONYMOUS_BIND    false
ENABLE_SERVER_TLS        true
ENABLE_KS_SSL            false
USER_NAME                cn=Administrator,cn=Users,dc=adcons,dc=spectrum
SERVERS                  9.18.76.50
BASE_DN                  dc=adcons,dc=spectrum
USER_DN                  cn=users,dc=adcons,dc=spectrum
USER_OBJECTCLASS         organizationalPerson
USER_NAME_ATTRIB         sAMAccountName
USER_ID_ATTRIB           cn
USER_MAIL_ATTRIB         mail
USER_FILTER              none
ENABLE_KS_CASIGNING      false
KS_ADMIN_USER            Administrator

ENABLE_SERVER_TLS should be set to true.

We are now ready to execute openstack and swift commands!