Guance Integration Considerations

Before using this series of script packages, please read the following precautions and explanations.

1. Fixed Cloud Provider Tags

When this series of script packages reports data to Guance, a cloud_provider tag identifying the cloud provider is added by default, to distinguish data from different providers.

Therefore, do not add a tag with the key cloud_provider in the account configuration's extra_tags.

The specific cloud platforms and their corresponding fixed tags are as follows:

| Cloud Platform | Tag Value      |
| -------------- | -------------- |
| Alibaba Cloud  | "aliyun"       |
| AWS            | "AWS"          |
| Tencent Cloud  | "tencentcloud" |
| Huawei Cloud   | "huaweicloud"  |
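As an illustration of the rule above, the following is a minimal sketch of an account configuration. The field names (`ak_id`, `ak_secret`, `extra_tags`) are assumptions based on the pattern used by this script series; adjust them to match your actual collector package.

```python
# Hedged sketch of an account configuration for this script series.
# Field names are assumptions; check your collector package's docs.
account = {
    'ak_id'     : '<YOUR_ACCESS_KEY_ID>',
    'ak_secret' : '<YOUR_ACCESS_KEY_SECRET>',
    'extra_tags': {
        # OK: custom tags for your own grouping
        'env' : 'prod',
        'team': 'sre',
        # NOT OK: 'cloud_provider' is reserved and added automatically
        # (e.g. 'aliyun' for Alibaba Cloud). Do not set it here.
        # 'cloud_provider': 'aliyun',
    },
}
```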

2. Supplementary Linkage for Cloud Monitoring Data

Generally, each cloud provider's cloud monitoring products return only the ID of the monitored object and its metric data (e.g., an ECS instance ID and its CPU usage), without additional information such as the instance name.

Therefore, when only the cloud monitoring collectors are enabled, you can still view and alert on monitoring data, but distinguishing objects solely by ID is very unintuitive.

Thus, when a cloud monitoring collector is enabled together with the corresponding custom object collector, the script automatically links the two by ID and attaches additional instance information to the cloud monitoring data.

Since the custom object information must be collected first for the cloud monitoring collector to link against it, it is generally recommended to place the cloud monitoring collector at the end of the list, for example:

```python
# Create collectors
collectors = [
    aliyun_ecs.DataCollector(account, common_aliyun_configs),
    aliyun_rds.DataCollector(account, common_aliyun_configs),
    aliyun_slb.DataCollector(account, common_aliyun_configs),
    aliyun_oss.DataCollector(account, common_aliyun_configs),

    # Cloud monitoring collectors are usually placed at the end
    aliyun_monitor.DataCollector(account, monitor_collector_configs),
]
```

For the specific linkage effects, please refer to each collector's documentation.
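The ID-based linkage described above can be sketched as a simple join between previously collected instance information and incoming metric points. This is illustrative only; the field names and the `attach_instance_tags` helper are assumptions, not the actual internals of the collectors.

```python
# Illustrative only: enriching cloud-monitoring metric points with
# instance info gathered earlier by a custom object collector.
instance_info = {
    'i-bp1abcd': {'name': 'web-server-01', 'region': 'cn-hangzhou'},
    'i-bp2efgh': {'name': 'db-server-01',  'region': 'cn-hangzhou'},
}

metric_points = [
    {'instance_id': 'i-bp1abcd', 'metric': 'cpu_usage', 'value': 37.5},
    {'instance_id': 'i-bp9none', 'metric': 'cpu_usage', 'value': 12.0},
]

def attach_instance_tags(points, info):
    # Attach name/region tags when the instance ID is known;
    # unknown IDs keep only the raw metric data.
    enriched = []
    for p in points:
        tags = dict(info.get(p['instance_id'], {}))
        enriched.append({**p, **tags})
    return enriched
```

When only the cloud monitoring collector runs, the `info` side of this join is empty, which is exactly why metric data then carries nothing but bare IDs.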

3. Collection Frequency Limitations for Different Collectors

Even if a shorter start interval (e.g., 1 minute) is configured in "Scheduled Tasks" (old version: "Automatic Trigger Configuration"), each collector internally enforces its own minimum collection interval based on its business characteristics.

The start interval is not the same as the collection interval: each start does not necessarily trigger an actual collection.

For most collectors that report custom objects, the minimum internal collection interval is 15 minutes. If the collector is started again within 15 minutes, it automatically skips the actual collection to avoid excessive API calls.

For cloud monitoring collectors, the minimum internal collection interval is 1 minute, but please set a reasonable start interval (e.g., 5 minutes) according to actual needs to avoid excessive API calls.
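The "start vs. actual collection" behavior above can be sketched as a simple interval guard. The names (`MIN_COLLECT_INTERVAL`, `maybe_collect`) are assumptions for illustration, not the collectors' actual internals.

```python
import time

# Illustrative sketch of the "minimum internal collection interval" guard:
# a start within the interval is a no-op; otherwise collection runs.
MIN_COLLECT_INTERVAL = 15 * 60  # 15 minutes, as for custom object collectors

_last_collect_ts = 0

def maybe_collect(now=None):
    global _last_collect_ts
    now = time.time() if now is None else now
    if now - _last_collect_ts < MIN_COLLECT_INTERVAL:
        return False  # started, but actual collection skipped
    _last_collect_ts = now
    return True       # actual collection performed
```

With a 1-minute start interval, this guard would let only one in every fifteen starts perform a real collection.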

4. Data Collection Delay

This series of scripts may experience some delays during the collection process.

For custom object collectors, due to the minimum internal collection interval, when instances are newly purchased or released, or instance names are modified, the change may not be reflected in Guance until the next actual collection runs.

For cloud monitoring collectors, when "supplementary linkage for cloud monitoring data" is involved, the supplementary tags come from the custom object collectors' processing, so they are also subject to the custom object collectors' minimum internal collection interval described above.

5. Accuracy of Cloud Monitoring Data

For cloud monitoring collectors, the data is obtained through the cloud provider's APIs after aggregation, rather than as raw data.

The cloud monitoring data collected by this series of collectors is therefore "secondary data" and may differ from what is shown in the cloud provider's console.

6. Possible Massive API Calls and Data

If a collector is configured to fetch all data, it can generate massive numbers of API queries, easily triggering the cloud platform's throttling limits or even getting the AK blocked.

Therefore, when configuring collectors, fully consider each product's API call rules and the volume of data that will be generated in Guance.

The author of this script and related parties shall not be held responsible for any economic losses caused thereby.

7. Time Consumption When Collecting Multiple Products Simultaneously

When multiple collectors are configured in a single function, the execution time may be relatively long due to the large number of requests made to the cloud provider's APIs.

For example, with Alibaba Cloud, the following collection plan:

| Collector          | Configuration                      |
| ------------------ | ---------------------------------- |
| Cloud Monitoring   | All metrics for ECS, RDS, SLB, OSS |
| ECS, RDS, SLB, OSS | Limited to the Hangzhou region     |

A single run takes approximately 1 to 2 minutes.

Therefore, generally speaking, when collecting multiple products in a single function, a longer timeout (in seconds) needs to be specified for the function, such as:

```python
# Due to the large amount of data collected, a larger timeout (in seconds)
# needs to be specified for the function here.
@DFF.API('Execute Cloud Asset Synchronization', timeout=300)
def run():
    # Specific code omitted
    pass
```

The timeout parameter can be set up to 3600 seconds (i.e., 1 hour). It is generally recommended to set it to about 1.5 times the actual required duration; an overly long timeout may cause unnecessary congestion in the execution queue.