Admin Documentation
The OpenStack Compute service allows you to control an
Infrastructure-as-a-Service (IaaS) cloud computing platform. It gives you
control over instances and networks, and allows you to manage access to the
cloud through users and projects.
Compute does not include virtualization software. Instead, it defines drivers
that interact with underlying virtualization mechanisms that run on your host
operating system, and exposes functionality over a web-based API.
Overview
To effectively administer compute, you must understand how the different
installed nodes interact with each other. Compute can be installed in many
different ways using multiple servers, but generally multiple compute nodes
control the virtual servers and a cloud controller node contains the remaining
Compute services.
The Compute cloud works using a series of daemon processes named nova-*
that exist persistently on the host machine. These binaries can all run on the
same machine or be spread out on multiple boxes in a large deployment. The
responsibilities of services and drivers are:
Services
- nova-api-metadata
A server daemon that serves the Nova Metadata API.
- nova-api-os-compute
A server daemon that serves the Nova OpenStack Compute API.
- nova-api
A server daemon that serves the metadata and compute APIs in separate
greenthreads.
- nova-compute
Manages virtual machines. Loads a Service object, and exposes the public
methods on ComputeManager through a Remote Procedure Call (RPC).
- nova-conductor
Provides database-access support for compute nodes (thereby reducing security
risks).
- nova-scheduler
Dispatches requests for new virtual machines to the correct node.
- nova-novncproxy
Provides a VNC proxy for browsers, allowing VNC consoles to access virtual
machines.
- nova-spicehtml5proxy
Provides a SPICE proxy for browsers, allowing SPICE consoles to access
virtual machines.
- nova-serialproxy
Provides a serial console proxy, allowing users to access a virtual machine’s
serial console.
The architecture is covered in much greater detail in
Nova System Architecture.
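The RPC mechanism mentioned for nova-compute can be sketched in miniature. The following is an illustrative Python example only, not Nova's actual code: a toy manager class whose public methods are reachable through a small dispatcher, while underscore-prefixed helpers remain private, mirroring how nova-compute exposes the public methods on ComputeManager over RPC.

```python
# Illustrative sketch only: mimics how a service might expose a manager's
# public methods to remote callers. All names here are hypothetical.

class ComputeManager:
    """Toy stand-in for a manager whose public methods are RPC-callable."""

    def start_instance(self, uuid):
        return f"starting {uuid}"

    def _internal_helper(self):
        # Underscore-prefixed: an implementation detail, not exposed.
        return "private"


def dispatch(manager, method, *args):
    """Route an incoming 'RPC' call to a public manager method."""
    if method.startswith("_"):
        raise AttributeError(f"{method} is not an exposed method")
    return getattr(manager, method)(*args)


result = dispatch(ComputeManager(), "start_instance", "abc-123")
```

The point of the pattern is that callers never hold a reference to the manager object itself; they only send a method name and arguments over the message bus, and only the public surface is reachable.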
Note
Some services have drivers that change how the service implements its core
functionality. For example, the nova-compute
service supports drivers
that let you choose which hypervisor type it can use.
Deployment Considerations
Before deploying, there are topics you might want to consider, especially if you
are planning a larger deployment. For smaller deployments, the defaults from the
install guide will be sufficient.
Compute Driver Features Supported: While the majority of nova deployments use
libvirt/kvm, you can use nova with other compute drivers. Nova attempts to
provide a unified feature set across these; however, not all features are
implemented on all backends, and not all features are equally well tested.
Feature Support by Use Case: A view of
what features each driver supports based on what’s important to some large
use cases (General Purpose Cloud, NFV Cloud, HPC Cloud).
Feature Support full list: A detailed dive through
features in each compute driver backend.
Cells v2 configuration: For large deployments, cells allow sharding of your
compute environment into separate groups of hosts, each with its own database
and message queue. Upfront planning is key to a successful cells v2 layout.
Availability Zones: Availability Zones are
an end-user visible logical abstraction for partitioning a cloud without
knowing the physical infrastructure.
Placement service: Overview of the placement service, including how it fits in
with the rest of nova.
Running nova-api on wsgi: Considerations for using a real
WSGI container instead of the baked-in eventlet web server.
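To make the cells v2 sharding idea above concrete, the sketch below shows the general shape: each cell has its own database and message-queue endpoints, and an API-level mapping records which cell owns each instance, so lookups consult the mapping first. This is a simplified illustration of the concept, not Nova's actual schema or API; all names are hypothetical.

```python
# Illustrative cells-style sharding: each instance is mapped to exactly
# one cell, and each cell has its own DB and message queue endpoints.

cells = {
    "cell1": {"db": "nova_cell1", "mq": "rabbit://cell1"},
    "cell2": {"db": "nova_cell2", "mq": "rabbit://cell2"},
}

# API-level mapping table: instance UUID -> owning cell name.
instance_mappings = {}


def record_mapping(instance_uuid, cell_name):
    """Record which cell an instance lives in, at instance-create time."""
    instance_mappings[instance_uuid] = cell_name


def lookup(instance_uuid):
    """Find the cell (and its DB/MQ endpoints) that owns an instance."""
    cell = instance_mappings[instance_uuid]
    return cell, cells[cell]


record_mapping("inst-a", "cell1")
record_mapping("inst-b", "cell2")
```

Because the per-instance state lives only in the owning cell's database, adding capacity means adding cells rather than growing one database and queue without bound, which is the motivation for the upfront planning the text recommends.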
Basic configuration
Once you have an OpenStack deployment up and running, you will want to manage
it. The guides below cover everything from creating your initial flavors and
images to log management and live migration of instances.
Quotas: Managing project quotas in nova.
Scheduling: How the scheduler is
configured, and how that will impact where compute instances land in your
environment. If you are seeing unexpected distribution of compute instances
in your hosts, you’ll want to dive into this configuration.
Exposing custom metadata to compute instances: How
and when you might want to extend the basic metadata exposed to compute
instances (either via metadata server or config drive) for your specific
purposes.
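The scheduling behavior described above follows a filter-and-weigh pattern: hosts that cannot satisfy the request are filtered out, and the survivors are weighed to pick the best candidate. The sketch below illustrates that pattern in the spirit of Nova's filter scheduler; the host records, filter criteria, and function names are hypothetical, and real filters and weighers are configurable.

```python
# Illustrative filter-and-weigh host selection, in the spirit of Nova's
# filter scheduler. Hosts, fields, and weights here are hypothetical.

hosts = [
    {"name": "node1", "free_ram_mb": 2048, "free_vcpus": 2},
    {"name": "node2", "free_ram_mb": 8192, "free_vcpus": 8},
    {"name": "node3", "free_ram_mb": 512,  "free_vcpus": 1},
]


def ram_filter(host, req):
    return host["free_ram_mb"] >= req["ram_mb"]


def vcpu_filter(host, req):
    return host["free_vcpus"] >= req["vcpus"]


def select_host(hosts, req):
    """Filter out hosts that cannot fit the request, then weigh the
    survivors (here: prefer the most free RAM) and pick the best."""
    survivors = [h for h in hosts
                 if ram_filter(h, req) and vcpu_filter(h, req)]
    if not survivors:
        raise RuntimeError("No valid host was found")
    return max(survivors, key=lambda h: h["free_ram_mb"])["name"]


best = select_host(hosts, {"ram_mb": 1024, "vcpus": 2})
```

If instances are landing on unexpected hosts, this is the pipeline to inspect: a too-permissive filter set or a skewed weigher changes which survivor wins.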
Advanced configuration
OpenStack clouds run on platforms that differ greatly in the capabilities that
they provide. By default, the Compute service seeks to abstract the underlying
hardware that it runs on, rather than exposing specifics about the underlying
host platforms. This abstraction manifests itself in many ways. For example,
rather than exposing the types and topologies of CPUs running on hosts, the
service exposes a number of generic CPUs (virtual CPUs, or vCPUs) and allows
for overcommitting of these. In a similar manner, rather than exposing the
individual types of network devices available on hosts, generic
software-powered network ports are provided. These features are designed to
allow high resource utilization and to let the service provide a generic,
cost-effective, and highly scalable cloud upon which to build applications.
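The vCPU overcommitting mentioned above is governed by an allocation ratio: the scheduler treats each host as offering its physical core count multiplied by that ratio. The arithmetic below uses 16.0 as an example ratio (the value is operator-configurable via the `cpu_allocation_ratio` option in nova.conf); the function names are illustrative, not Nova's API.

```python
# Overcommit arithmetic: advertised capacity = physical units * ratio.
# 16.0 is an example CPU allocation ratio; the real value is set by the
# operator (cpu_allocation_ratio in nova.conf) and may differ.

def advertised_vcpus(physical_cores, allocation_ratio=16.0):
    """vCPUs the scheduler may place on this host in total."""
    return int(physical_cores * allocation_ratio)


def remaining_vcpus(physical_cores, used_vcpus, allocation_ratio=16.0):
    """Capacity left after subtracting vCPUs already claimed."""
    return advertised_vcpus(physical_cores, allocation_ratio) - used_vcpus


cap = advertised_vcpus(8)        # 8 cores at 16x -> 128 schedulable vCPUs
left = remaining_vcpus(8, 100)   # 128 - 100 -> 28 vCPUs still available
```

The same multiply-by-ratio scheme applies to RAM and disk with their own ratios, which is why a host can report far more virtual capacity than physical.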
This abstraction is beneficial for most workloads. However, there are some
workloads where determinism and per-instance performance are important, if not
vital. In these cases, instances are expected to deliver near-native
performance. The Compute service provides features to improve individual
instance performance for these kinds of workloads.
Important
In deployments older than Train, or in mixed Stein/Train deployments with a
rolling upgrade in progress, live migration is not possible for instances
with a NUMA topology when using the libvirt driver, unless specifically
enabled. A NUMA topology may be specified explicitly or can be added
implicitly due to the use of CPU pinning or huge pages. Refer to bug
#1289064 for more information. As of Train, live migration of instances
with a NUMA topology when using the libvirt driver is fully supported.
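As an example of how a NUMA topology can be added implicitly, the flavor extra specs below request CPU pinning, huge pages, and an explicit two-node NUMA layout; any one of these gives the instance a NUMA topology subject to the restriction above. The flavor name is hypothetical, and these commands are illustrative, to be run against your own deployment.

```
openstack flavor set numa-flavor \
  --property hw:numa_nodes=2 \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=large
```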
Maintenance
Once you are running nova, the following information is extremely useful.