Nova uses a quota system for setting limits on resources such as number of instances or amount of CPU that a specific project or user can use.
Quotas are enforced by making a claim, or reservation, on resources when a request is made, such as creating a new server. If the claim fails, the request is rejected. If the reservation succeeds, the operation proceeds until the reservation is either converted into usage (the operation succeeded) or rolled back (the operation failed).
Typically the quota reservation is made in the nova-api service and the usage or rollback is performed in the nova-compute service, at least when dealing with a server creation or move operation.
Quota limits and usage can be retrieved via the limits REST API.
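For example, quota limits and current usage can also be inspected from the command line with python-openstackclient; the exact fields in the output vary by release:

# Show absolute limits (quota limits and current usage) for the current project
openstack limits show --absolute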
When calculating limits for a given resource and tenant, the following checks are made in order:
1. Depending on the resource, is there a tenant-specific limit on the resource in either the quotas or project_user_quotas tables in the database? If so, use that as the limit. You can create these limits by doing:

openstack quota set --instances 5 <project>

2. Check to see if there is a hard limit for the given resource in the quota_classes table in the database for the default quota class. If so, use that as the limit. You can modify the default quota limit for a resource by doing:

openstack quota set --class --instances 5 default

3. If the above does not provide a resource limit, then rely on the quota_* configuration options for the default limit (see the sample configuration after this list).
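As a concrete illustration, the default limits can be set in nova.conf via the [quota] options; the option names below (instances, cores, ram) are standard quota options and the values are only examples:

# Example default limits in nova.conf; values are illustrative.
[quota]
instances = 20
cores = 40
ram = 102400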
Note
The API sets the limit in the quota_classes table. Once a default limit is set via the default quota class, it takes precedence over any changes to that resource limit in the configuration options. In other words, once you have changed a default limit via the API, you either have to keep the database value synchronized with the configuration value or remove the default limit from the database manually, as there is no REST API for removing quota class values from the database.
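As a rough sketch only, removing such a default limit means deleting the corresponding row from the database by hand. The database name (nova_api), table, and column names below are assumptions; verify them against your deployment's schema before running anything:

# Illustrative only: remove the default quota class limit for 'instances'.
# Confirm the database name and schema for your release first.
mysql nova_api -e "DELETE FROM quota_classes WHERE class_name = 'default' AND resource = 'instances';"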
Starting in the Train (20.0.0) release, it is possible to configure quota usage counting of cores and ram from the placement service and instances from instance mappings in the API database instead of counting resources from cell databases. This makes quota usage counting resilient in the presence of down or poor-performing cells.
Quota usage counting from placement is opt-in via configuration option:
[quota]
count_usage_from_placement = True
There are some things to note when opting in to counting quota usage from placement:
- Operators who are running multiple Nova deployments that share a single placement deployment should not set the quota.count_usage_from_placement configuration option to True, because placement currently has no way of partitioning resource providers between different Nova deployments and counted usage would not be accurate.

- During a resize, resource allocations are held on both the source and destination until the resize is confirmed or reverted, so quota usage will be inflated for servers in this state. Operators should weigh the advantages and disadvantages before enabling quota.count_usage_from_placement.
- The populate_queued_for_delete and populate_user_id online data migrations must be completed before usage can be counted from placement. Until the data migration is complete, the system will fall back to legacy quota usage counting from cell databases depending on the result of an EXISTS database query during each quota check, if quota.count_usage_from_placement is set to True. Operators who want to avoid the performance hit from the EXISTS queries should wait to set the quota.count_usage_from_placement configuration option to True until after they have completed their online data migrations via nova-manage db online_data_migrations (see the example after this list).
- Behavior will be different for unscheduled servers in ERROR state. A server in ERROR state that has never been scheduled to a compute host will not have placement allocations, so it will not consume quota usage for cores and ram.

- Behavior will be different for servers in SHELVED_OFFLOADED state. A server in SHELVED_OFFLOADED state will not have placement allocations, so it will not consume quota usage for cores and ram. Note that because of this, it will be possible for a request to unshelve a server to be rejected if the user does not have enough quota available to support the cores and ram needed by the server to be unshelved.
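As noted above, the online data migrations must be complete before usage is counted from placement. A minimal sketch of running them on an API host with the standard nova-manage tooling:

# Run before enabling count_usage_from_placement; repeat until the command
# reports that no migrations remain to be run.
nova-manage db online_data_migrations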
TODO: talk about quotas getting out of sync and how to recover
TODO: talk about quotas in the resource counting spec and nested quotas