The following figure describes the high-level architecture of the monitoring system.
Data is collected through ceilometer (for which customized pollsters have been developed) on each node. Relevant data is sent to Monasca on the master node, where it is stored and eventually passed to the FIWARE Big Data GE (Cosmos) for aggregation and analysis. The Sanity Check tool also sends data to Monasca. Finally, the Infographic (as well as other clients) retrieves the data through Monasca.
This GitHub repository contains all the pollsters and the additional customizations that Infrastructure Owners (IOs) have to apply. IOs have to customize the standard ceilometer installation by adding some pollsters and by editing the configuration files.
Some background on ceilometer: it is the tool created to handle the Telemetry requirements of an OpenStack environment, covering use cases such as metering, monitoring, and alarming.
_Figure taken from the [ceilometer documentation](http://docs.openstack.org/developer/ceilometer/architecture.html)_

After the installation/configuration, IOs should be able to obtain information about their OpenStack installation directly from ceilometer:
- region
- image (not needed in Kilo)
- host service (nova, glance, cinder, ... )
- host
- vm (not needed in Kilo)
The installation/configuration is divided into two main parts:
- the installation procedure on the central node
- the installation procedure on the compute nodes
On the central node only the central-agent pollsters are installed; on the compute nodes only the compute-agent pollsters are installed.
Please follow these steps:
- Open the file
/etc/ceilometer/ceilometer.conf
and add these rows at the end (with your region values):
[region]
latitude=1.1
longitude=12.2
location=IT
netlist=net04_ext,net05_ext
ram_allocation_ratio=1.5
cpu_allocation_ratio=16
Replace the values with your OpenStack installation information (netlist is the comma-separated list of the names of your external networks).
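For reference, the append can be scripted with a here-document. The sketch below runs against a scratch file so it is safe to execute; on a real controller node, point CONF at /etc/ceilometer/ceilometer.conf and run it as root (all values are placeholders for your own deployment data):

```shell
# CONF points at a scratch file here; on a controller node set it to
# /etc/ceilometer/ceilometer.conf. The values are placeholders.
CONF=$(mktemp)
cat >> "$CONF" <<'EOF'

[region]
latitude=1.1
longitude=12.2
location=IT
netlist=net04_ext,net05_ext
ram_allocation_ratio=1.5
cpu_allocation_ratio=16
EOF
grep -A6 '^\[region\]' "$CONF"   # show the appended section
```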
- Copy the region folder from the GitHub repository into the ceilometer package folder:
cd /usr/lib/python2.7/dist-packages/ceilometer
Afterwards, you should see the region folder and its content:
ls /usr/lib/python2.7/dist-packages/ceilometer/region
__init__.py
region.py
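The copy itself is a plain recursive cp from your clone of this repository into the ceilometer package directory. A sketch, run here against scratch directories so it is safe to execute; REPO and CEILO are stand-ins for your clone path and /usr/lib/python2.7/dist-packages/ceilometer:

```shell
# REPO stands in for wherever you cloned this repository,
# CEILO for /usr/lib/python2.7/dist-packages/ceilometer.
REPO=$(mktemp -d)
CEILO=$(mktemp -d)
mkdir -p "$REPO/region"
touch "$REPO/region/__init__.py" "$REPO/region/region.py"

cp -r "$REPO/region" "$CEILO/"   # the actual copy step
ls "$CEILO/region"               # should list __init__.py and region.py
```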
- Edit the ceilometer entry points file
/usr/lib/python2.7/dist-packages/ceilometer-2015.1.1.egg-info/entry_points.txt
find the [ceilometer.poll.central] section and add this row:
region = ceilometer.region.region:RegionPollster
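If you prefer not to edit the file by hand, the row can be added with sed. A sketch against a scratch stand-in for the file; on a real node run it on the entry_points.txt path above:

```shell
# EP is a scratch stand-in for entry_points.txt with just the
# section header of interest.
EP=$(mktemp)
printf '[ceilometer.poll.central]\n' > "$EP"

# Append the entry point directly under the section header.
sed -i '/^\[ceilometer\.poll\.central\]$/a region = ceilometer.region.region:RegionPollster' "$EP"
cat "$EP"
```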
- Restart the central pollster agent, i.e.:
- crm resource restart p_ceilometer-agent-central (if you are using HA)
- OR service ceilometer-agent-central restart (if you are not using HA)
- Check if ceilometer is able to see the information about the region (remember to replace RegionOne with your region name):
# ceilometer resource-list -q resource_id=RegionOne
+-------------+-----------+---------+------------+
| Resource ID | Source | User ID | Project ID |
+-------------+-----------+---------+------------+
| RegionOne | openstack | None | None |
+-------------+-----------+---------+------------+
# ceilometer resource-show RegionOne
+-------------+------------------------------------------+
| Property | Value |
+-------------+------------------------------------------+
| metadata | {'name': 'RegionOne', 'longitude': ....} |
| project_id | None |
| resource_id | RegionOne |
| source | openstack |
| user_id | None |
+-------------+------------------------------------------+
NOT NEEDED IF YOU HAVE A CEILOMETER FOR OPENSTACK KILO
The pollster for the image entity is already provided by a standard installation of ceilometer. Check whether it is enabled in the configuration file:
- Open the file:
/usr/lib/python2.7/dist-packages/ceilometer-2015.1.1.egg-info/entry_points.txt
and check that, under [ceilometer.notification], this row is present:
image = ceilometer.image.notifications:Image
- Check that one of your images is available and provided with all the needed information:
# ceilometer meter-list | grep image
+-------+-------+-------+----------------+---------+------------+
| Name | Type | Unit | Resource ID | User ID | Project ID |
+-------+-------+-------+----------------+---------+------------+
| image | gauge | image | aa-bb-cc-dd-ee | None | 0000000000 |
+-------+-------+-------+----------------+---------+------------+
TODO: we still have to understand how Monasca already manages these checks.
- Add these rows to the file
/etc/nova/nova.conf
in the [DEFAULT] section, then restart the nova-compute service:
compute_monitors = ComputeDriverCPUMonitor
notification_driver = messagingv2
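Since nova.conf usually contains other sections after [DEFAULT], it is safer to place the two options right below the section header rather than at the end of the file. A sketch against a scratch stand-in; on a compute node run it on /etc/nova/nova.conf:

```shell
# Scratch stand-in for /etc/nova/nova.conf with a section after [DEFAULT].
NOVA_CONF=$(mktemp)
printf '[DEFAULT]\n\n[database]\n' > "$NOVA_CONF"

# Two -e expressions: sed emits the appended lines in script order,
# so both options land directly under [DEFAULT], in this order.
sed -i \
  -e '/^\[DEFAULT\]$/a compute_monitors = ComputeDriverCPUMonitor' \
  -e '/^\[DEFAULT\]$/a notification_driver = messagingv2' \
  "$NOVA_CONF"
cat "$NOVA_CONF"
```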
- Copy the host.py file from the compute_pollster folder into this folder
/usr/lib/python2.7/dist-packages/ceilometer/compute/pollsters
- Enable the pollster by adding the following row to the file
/usr/lib/python2.7/dist-packages/ceilometer-2015.1.1.egg-info/entry_points.txt
under the section [ceilometer.poll.compute]:
compute.info = ceilometer.compute.pollsters.host:HostPollster
- Restart the compute agent and check if you are able to see the host information in your ceilometer (pay attention to compute.node.cpu.percent, which depends on the nova configuration above):
- service nova-compute restart
- service ceilometer-agent-compute restart
# ceilometer meter-list | grep compute.node
+--------------------------+-------+------+---------------------------+---------+------------+
| Name | Type | Unit | Resource ID | User ID | Project ID |
+--------------------------+-------+------+---------------------------+---------+------------+
| compute.node.cpu.max | gauge | cpu | node-2.aa.bb_node-2.aa.bb | None | None |
| compute.node.cpu.now | gauge | cpu | node-2.aa.bb_node-2.aa.bb | None | None |
| compute.node.cpu.percent | gauge | % | node-2.aa.bb_node-2.aa.bb | None | None |
| compute.node.cpu.tot | gauge | cpu | node-2.aa.bb_node-2.aa.bb | None | None |
| compute.node.disk.max | gauge | GB | node-2.aa.bb_node-2.aa.bb | None | None |
| compute.node.disk.now | gauge | GB | node-2.aa.bb_node-2.aa.bb | None | None |
| compute.node.disk.tot | gauge | GB | node-2.aa.bb_node-2.aa.bb | None | None |
| compute.node.ram.max | gauge | MB | node-2.aa.bb_node-2.aa.bb | None | None |
| compute.node.ram.now | gauge | MB | node-2.aa.bb_node-2.aa.bb | None | None |
| compute.node.ram.tot | gauge | MB | node-2.aa.bb_node-2.aa.bb | None | None |
+--------------------------+-------+------+---------------------------+---------+------------+
NOT NEEDED IF YOU HAVE A CEILOMETER FOR OPENSTACK KILO
- Replace the inspector.py in the folder
/usr/lib/python2.7/dist-packages/ceilometer/compute/virt
with the one in the repository at compute_pollster/virt/inspector.py
- Replace the inspector.py in the folder
/usr/lib/python2.7/dist-packages/ceilometer/compute/virt/libvirt
with the one in the repository at compute_pollster/virt/libvirt/inspector.py
- Replace the memory.py file in the folder
/usr/lib/python2.7/dist-packages/ceilometer/compute/pollsters
with the one from the compute_pollster folder
- Replace the disk.py file in the same folder with the one from the compute_pollster folder
- Enable the pollsters by adding the following rows to the file
/usr/lib/python2.7/dist-packages/ceilometer-2015.1.1.egg-info/entry_points.txt
under the section [ceilometer.poll.compute]:
memory.usage = ceilometer.compute.pollsters.memory:MemoryUsagePollster
memory.resident = ceilometer.compute.pollsters.memory:MemoryResidentPollster
disk.capacity = ceilometer.compute.pollsters.disk:CapacityPollster
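These rows too can be added non-interactively with sed; a sketch against a scratch stand-in for the entry_points.txt path above:

```shell
# EP is a scratch stand-in for entry_points.txt.
EP=$(mktemp)
printf '[ceilometer.poll.compute]\n' > "$EP"

# Appended lines are emitted in script order, so they land under the
# section header in the order given here.
sed -i \
  -e '/^\[ceilometer\.poll\.compute\]$/a memory.usage = ceilometer.compute.pollsters.memory:MemoryUsagePollster' \
  -e '/^\[ceilometer\.poll\.compute\]$/a memory.resident = ceilometer.compute.pollsters.memory:MemoryResidentPollster' \
  -e '/^\[ceilometer\.poll\.compute\]$/a disk.capacity = ceilometer.compute.pollsters.disk:CapacityPollster' \
  "$EP"
cat "$EP"
```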
- Restart the compute agent and check if you are able to see the disk and memory information for one of your VMs:
# ceilometer meter-list | grep 'disk.capacity'
+--------------------------+-------+------+---------------------------+---------+------------+
| Name | Type | Unit | Resource ID | User ID | Project ID |
+--------------------------+-------+------+---------------------------+---------+------------+
| disk.capacity | gauge | B | aa-bb-cc-dd-ee | user1 | project1 |
+--------------------------+-------+------+---------------------------+---------+------------+
# ceilometer sample-list -m disk.capacity -q "resource_id=aa-bb-cc-dd-ee"
+----------------+---------------+-------+--------------+------+---------------------+
| Resource ID | Name | Type | Volume | Unit | Timestamp |
+----------------+---------------+-------+--------------+------+---------------------+
| aa-bb-cc-dd-ee | disk.capacity | gauge | 3221225472.0 | B | 2015-11-11T15:38:43 |
+----------------+---------------+-------+--------------+------+---------------------+
# ceilometer meter-list | grep 'memory.'
+-----------------+-------+------+-----------------+---------+------------+
| Name | Type | Unit | Resource ID | User ID | Project ID |
+-----------------+-------+------+-----------------+---------+------------+
| memory | gauge | MB | aa-bb-cc-dd-ee | user1 | project1 |
| memory.resident | gauge | MB | aa-bb-cc-dd-ee | user1 | project1 |
| memory.usage | gauge | MB | aa-bb-cc-dd-ee | user1 | project1 |
+-----------------+-------+------+-----------------+---------+------------+
# ceilometer sample-list -m memory.usage -q "resource_id=aa-bb-cc-dd-ee"
+----------------+---------------+-------+------------+------+---------------------+
| Resource ID | Name | Type | Volume | Unit | Timestamp |
+----------------+---------------+-------+------------+------+---------------------+
| aa-bb-cc-dd-ee | memory.usage | gauge | 101.0 | MB | 2015-11-11T15:38:43 |
+----------------+---------------+-------+------------+------+---------------------+