This is a work in progress. Use caution when making financial or operational decisions based on this data. Confirm all data directly from the Elastic Cloud Billing Dashboard in your account. This is not officially endorsed or supported by Elastic Co.
Pulls Elastic Cloud billing information from the Billing API and sends it to an elasticsearch cluster.
Author : Jeff Vestal - github.com/jeffvestal
This script connects to Elastic Cloud's Billing API and pulls down various billing data.
Depending on the section, info can include:
- costs
- total
- hourly
- dts
- resources
- node type breakdown
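A minimal sketch of the kind of pull involved, using only the standard library. The base URL and the `/billing/costs/{organization_id}` path are assumptions based on the public Elastic Cloud API; verify them against the current Billing API reference:

```python
import json
from urllib.request import Request, urlopen

API_BASE = "https://api.elastic-cloud.com/api/v1"  # assumed public API base


def build_costs_request(org_id: str, api_key: str) -> Request:
    """Build a GET request for an organization's cost summary.

    The path below is an assumption; check the Billing API docs
    for the exact endpoints your account exposes.
    """
    url = f"{API_BASE}/billing/costs/{org_id}"
    return Request(url, headers={"Authorization": f"ApiKey {api_key}"})


# Example (requires a valid key and organization id):
#   with urlopen(build_costs_request(org_id, key)) as resp:
#       costs = json.load(resp)
```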
Billing data is sent to an elasticsearch cluster where it can be used for analysis, searching, alerting, dashboards, magic.
ndjson dashboard files are under ./dashboards/
- python 3.6+
- elasticsearch python library
- Elastic Cloud account
- Elastic Cloud API Key
- elasticsearch cluster to store billing data
- billing_api_key
- The Elastic Cloud API Key
- billing_es_id
- destination elasticsearch cloud_id
- billing_es_api
- destination elasticsearch api_key
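The script reads these settings from the environment. A sketch of that lookup with a clear failure when one is missing (this mirrors, but is not necessarily, the script's exact code):

```python
import os

# The three variables the README requires
REQUIRED_VARS = ("billing_api_key", "billing_es_id", "billing_es_api")


def load_config() -> dict:
    """Return the required settings, failing fast if any is unset."""
    missing = [v for v in REQUIRED_VARS if not os.environ.get(v)]
    if missing:
        raise SystemExit(
            f"Missing required environment variables: {', '.join(missing)}"
        )
    return {v: os.environ[v] for v in REQUIRED_VARS}
```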
- organization_delay = 60
- Delay between account-level summary data pulls
- deployment_inventory_delay = 3600
- Delay between deployment-level summary data pulls
- deployment_itemized_delay = 60
- Delay between deployment itemized data pulls
- Set required environment variables
- ./ess-billing-ingest.py
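Putting the two steps together, a typical launch looks like this (values are placeholders; the script runs until interrupted):

```shell
# Placeholder values -- substitute your own key and destination cluster details
export billing_api_key="ESS_BILLING_API_KEY"
export billing_es_id="DESTINATION_CLOUD_ID"
export billing_es_api="DESTINATION_ES_API_KEY"

# Then start the collector:
#   ./ess-billing-ingest.py
```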
By default data is written out to 3 separate indices:
- ess.billing
- Org level summary
- ess.billing.deployment
- Deployment level summary
- ess.billing.deployment.itemized
- Deployment itemized billing
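Routing documents to those three indices can be sketched as follows; the index names come from the list above, while the document shape and the `to_bulk_action` helper are illustrative, not the script's actual code:

```python
# Map each billing level to its destination index (names from the list above)
INDICES = {
    "organization": "ess.billing",
    "deployment": "ess.billing.deployment",
    "itemized": "ess.billing.deployment.itemized",
}


def to_bulk_action(level: str, doc: dict) -> dict:
    """Wrap a billing document as an action for the bulk helper."""
    return {"_index": INDICES[level], "_source": doc}


# With the official client, these actions feed elasticsearch.helpers.bulk:
#   from elasticsearch import Elasticsearch
#   from elasticsearch.helpers import bulk
#   es = Elasticsearch(cloud_id=cloud_id, api_key=api_key)
#   bulk(es, (to_bulk_action("itemized", d) for d in docs))
```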
Elasticsearch's dynamic mapping auto-types each field correctly. If a different type is required, index templates can be set up ahead of time.
There is one runtime field added to the mapping to parse the cloud region out of ess.billing.deployment.itemized
documents with bill.type: resources.
This can be added to the index mapping at any time:
PUT ess.billing.deployment.itemized/_mapping
{
  "runtime": {
    "cloudregion": {
      "type": "keyword",
      "script": """
        if (doc["bill.type.keyword"].value == "resources") {
          String cloudregion = grok('%{WORD:provider}\\.%{WORD:node_type}\\.%{WORD:nothing}\\.%{WORD:nothing}-%{DATA:cloudregion}_').extract(doc["sku.keyword"].value)?.cloudregion;
          if (cloudregion != null) emit(cloudregion);
        }
      """
    }
  }
}
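The grok pattern above pulls the region out of the SKU string. The same extraction in plain Python, using a hypothetical SKU value for illustration (real SKUs may differ in shape):

```python
import re

# Rough equivalent of the grok pattern: provider.node_type.word.word-REGION_
SKU_RE = re.compile(r"^(\w+)\.(\w+)\.(\w+)\.(\w+)-(.+?)_")


def cloud_region(sku: str):
    """Return the region embedded in a resources SKU, or None if no match."""
    m = SKU_RE.match(sku)
    return m.group(5) if m else None
```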
Currently ILM is not auto-configured, so it is up to the user to decide how to manage the lifecycle of the data.
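If lifecycle management is wanted, a policy can be attached after the indices exist. A minimal sketch in the same console style as the mapping above (the policy name and the 90-day retention are arbitrary examples, not recommendations):

```
PUT _ilm/policy/ess-billing
{
  "policy": {
    "phases": {
      "hot": { "actions": {} },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

The policy still has to be referenced from the indices, e.g. via an index template or the index.lifecycle.name setting.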