Installation

Installation in Python environment

Shaker is distributed as a Python package and is available on PyPI (https://pypi.org/project/pyshaker/).

$ pip install --user pyshaker

OpenStack Deployment

[Figure: Shaker architecture diagram]

Requirements:

  • The machine where Shaker is executed must be routable from OpenStack instances and must have an open port to accept connections from agents running on the instances

For full feature support it is advised to run Shaker as an admin user. However, it also works for a non-admin user with some limitations - see Running Shaker by non-admin user for details.

Base image

Automatic build in OpenStack

The base image can be built using the shaker-image-builder tool.

$ shaker-image-builder

There are two modes available:

  • heat - using a Heat template (requires Glance v1 for base image upload);
  • dib - using diskimage-builder elements (requires qemu-utils and debootstrap to build an Ubuntu-based image).

By default the mode is selected automatically, preferring heat if Glance API v1 is available. The created image is uploaded into Glance and made available for subsequent Shaker runs. For the full list of parameters refer to shaker-image-builder.
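If the automatically selected mode does not fit your cloud (for example, Glance v1 is advertised but image upload fails), the mode can be forced explicitly. A minimal sketch, assuming the switch is exposed as --image-builder-mode (verify with shaker-image-builder --help on your version):

$ shaker-image-builder --image-builder-mode dib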

Manual build with disk-image-builder

The Shaker image can also be built manually using the diskimage-builder tool.

  1. Install diskimage-builder. Refer to the diskimage-builder installation guide.
  2. Clone the Shaker repo: git clone https://opendev.org/performa/shaker
  3. Add the search path for diskimage-builder elements: export ELEMENTS_PATH=shaker/shaker/resources/image_elements
  4. Build the image based on Ubuntu Xenial: disk-image-create -o shaker-image.qcow2 ubuntu vm shaker
  5. Upload the image into Glance: openstack image create --public --file shaker-image.qcow2 --disk-format qcow2 shaker-image
  6. Create a flavor: openstack flavor create --ram 512 --disk 3 --vcpus 1 shaker-flavor
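To verify the result, you can boot a test instance from the freshly uploaded image (a hedged sketch; the network name private is an assumption about your environment):

$ openstack server create --image shaker-image --flavor shaker-flavor --network private shaker-test

Once the instance reaches ACTIVE state the image is usable; it can then be removed with openstack server delete shaker-test.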

Running Shaker by non-admin user

While the full feature set is available when Shaker is run by an admin user, it also works with some limitations for a non-admin user.

Image builder limitations

The image builder requires the flavor name to be specified via the command-line parameter --flavor-name. Create the flavor prior to running Shaker, or choose an existing one that satisfies the instance template requirements. For the Ubuntu-based image the requirements are 512 MB RAM, 3 GB disk and 1 vCPU.
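For example, assuming a suitable flavor named shaker-flavor already exists (e.g. created by an admin as in the manual build steps above), the image builder would be invoked as:

$ shaker-image-builder --flavor-name shaker-flavor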

Execution limitations

A non-admin user has no permission to list compute nodes or to deploy instances onto particular compute nodes.

When instances need to be deployed on a small number of compute nodes, it is possible to use server groups and specify an anti-affinity policy within them. Note, however, that server group size is limited by the quota_server_group_members parameter in nova.conf. The following Heat template snippets add server groups.

Add to resources section:

server_group:
  type: OS::Nova::ServerGroup
  properties:
    name: {{ unique }}_server_group
    policies: [ 'anti-affinity' ]

Add attribute to server definition:

scheduler_hints:
  group: { get_resource: server_group }

A similar patch is needed to implement dense scenarios; the only difference is the server group policy, which should be 'affinity', as shown below.
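For a dense scenario the server group resource is the same snippet with the policy swapped:

server_group:
  type: OS::Nova::ServerGroup
  properties:
    name: {{ unique }}_server_group
    policies: [ 'affinity' ]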

An alternative approach is to specify the number of compute nodes explicitly. Note that for a non-admin user the number must always be specified. If Nova distributes instances evenly (or with a normal random distribution), the chances that instances are placed on unique nodes are quite high. Still, there will be collisions due to the birthday problem (https://en.wikipedia.org/wiki/Birthday_problem), so expect the number of unique pairs to be lower than the specified number of compute nodes. As a rough illustration: scheduling 10 instances uniformly at random over 10 nodes lands them on only about 10 * (1 - (1 - 1/10)^10) ≈ 6.5 distinct nodes on average.

Non-OpenStack Deployment (aka Spot mode)

To run scenarios against remote nodes (the shaker-spot command), install Shaker on the local host. Make sure all necessary tools are installed too. Refer to Spot Scenarios for more details.
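A minimal invocation could look like the following sketch (the scenario name spot/ping is an assumption; check the spot scenarios bundled with your version for the exact names):

$ shaker-spot --scenario spot/ping --report report.html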

Run Shaker against OpenStack deployed by Fuel-CCP on Kubernetes

Shaker can be run in a Kubernetes environment and can execute scenarios against OpenStack deployed by the Fuel-CCP tool.

The Shaker app consists of a service:

apiVersion: v1
kind: Service
metadata:
  name: shaker
spec:
  ports:
  - nodePort: 31999
    port: 31999
    protocol: TCP
    targetPort: 31999
  selector:
    app: shaker
  type: NodePort

and a pod:

apiVersion: v1
kind: Pod
metadata:
  name: shaker
  labels:
    app: shaker
spec:
  containers:
  - args:
    - --debug
    - --nocleanup
    env:
    - name: OS_USERNAME
      value: admin
    - name: OS_PASSWORD
      value: password
    - name: OS_PROJECT_NAME
      value: admin
    - name: OS_AUTH_URL
      value: http://keystone.ccp:5000/
    - name: SHAKER_SCENARIO
      value: openstack/perf_l2
    - name: SHAKER_SERVER_ENDPOINT
      value: 172.20.9.7:31999
    image: performa/shaker
    imagePullPolicy: Always
    name: shaker
    securityContext:
      privileged: false
    volumeMounts:
    - mountPath: /artifacts
      name: artifacts
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  volumes:
  - name: artifacts
    hostPath:
      path: /tmp

You may need to change the values of the variables defined in these manifests:

  • SHAKER_SERVER_ENDPOINT should point to an external address of the Kubernetes cluster, and OpenStack instances must have access to it
  • OS_*** parameters describe the connection to the Keystone endpoint
  • SHAKER_SCENARIO needs to be altered to run the desired scenario
  • The pod is configured to write logs into /tmp on the node that hosts the pod
  • port, nodePort and targetPort must be equal and must not conflict with other exposed services
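Assuming the service and pod manifests above are saved as shaker-svc.yaml and shaker-pod.yaml (hypothetical file names), they can be deployed with:

$ kubectl create -f shaker-svc.yaml
$ kubectl create -f shaker-pod.yaml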