
Deploying Instance Caging


Instance caging is deployed by setting two database initialization parameters on the instance: cpu_count and resource_manager_plan. The following procedure shows the practical steps required to enable instance caging on an instance:

SQL> ALTER SYSTEM SET CPU_COUNT=2 SCOPE=BOTH SID='INSTANCE_NAME';

SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = default_plan|myplan SCOPE=BOTH SID='INSTANCE_NAME';

The first SQL statement restricts the database instance’s CPU resource consumption to 2, irrespective of the number of CPUs present on the local node; the instance will be limited to a maximum of 2 CPUs’ worth of processing power. If the database is a policy-managed database and tends to move between nodes, you need to plan the cpu_count value carefully, considering the resource availability on the nodes listed in the server pool.

The second SQL statement enables the Resource Manager facility and activates either the default plan or a user-defined resource plan. An additional background process, the Virtual Scheduler for Resource Manager (VKRM), is started when Resource Manager is enabled. To verify whether CPU caging is enabled on an instance, use the following examples:

SQL> show parameter cpu_count

SQL> SELECT instance_caging FROM v$rsrc_plan WHERE is_top_plan = 'TRUE';
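
The second ALTER SYSTEM statement above may activate a user-defined plan such as myplan. As a minimal sketch, a simple plan of that kind could be created with the DBMS_RESOURCE_MANAGER package as follows; the plan name, comments, and the single OTHER_GROUPS directive are illustrative assumptions rather than values from the original text:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  -- Plan name matches the myplan placeholder used in the ALTER SYSTEM statement above
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'MYPLAN',
    comment => 'Sample plan used together with instance caging');

  -- Illustrative directive: all sessions fall into OTHER_GROUPS and share the CPU
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan             => 'MYPLAN',
    group_or_subplan => 'OTHER_GROUPS',
    comment          => 'CPU share for all other sessions',
    mgmt_p1          => 100);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/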

When instance caging is configured and enabled, it restricts the CPU usage of each individual instance, preventing any one instance from consuming an excessive share of the node’s CPU.

After enabling the feature, CPU usage can be monitored by querying the gv$rsrc_consumer_group and gv$rsrcmgrmetric_history dynamic views, which provide useful input about instance caging behavior. The gv$rsrcmgrmetric_history view in particular provides minute-by-minute detail of CPU consumption and throttling feedback for the past hour.
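
For example, a summary query against gv$rsrcmgrmetric_history might look like the following sketch; it assumes the cpu_consumed_time and cpu_wait_time columns are reported in milliseconds:

SQL> SELECT inst_id,
            consumer_group_name,
            ROUND(SUM(cpu_consumed_time)/1000) AS cpu_consumed_secs,
            ROUND(SUM(cpu_wait_time)/1000)     AS cpu_throttled_secs
     FROM   gv$rsrcmgrmetric_history
     GROUP  BY inst_id, consumer_group_name
     ORDER  BY inst_id, consumer_group_name;

A steadily growing cpu_throttled_secs figure for a consumer group indicates that the cage, rather than overall node load, is limiting that instance’s CPU.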

To disable instance caging, you simply need to reset the cpu_count parameter to 0 on the instance.
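
For example, following the same syntax as the enabling statement shown earlier:

SQL> ALTER SYSTEM SET CPU_COUNT=0 SCOPE=BOTH SID='INSTANCE_NAME';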

Note

■ cpu_count and resource_manager_plan are dynamic initialization parameters, which can be altered without instance downtime. Individual instances within a RAC database can have different cpu_count values. However, frequent modification of the cpu_count parameter is not recommended.
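
For example, the two statements below set different cpu_count values on two instances of the same RAC database; the instance names PROD1 and PROD2 are hypothetical:

SQL> ALTER SYSTEM SET CPU_COUNT=4 SCOPE=BOTH SID='PROD1';
SQL> ALTER SYSTEM SET CPU_COUNT=2 SCOPE=BOTH SID='PROD2';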

Figure 7-8 shows how to partition a 16-CPU/core box (node) using instance caging to limit different instances on the node to a specific amount of CPU usage.

Figure 7-8. Example instance caging a 16-CPU/core box for consolidated RAC databases
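
As a purely illustrative sketch (the instance names and CPU allocations below are assumptions, not values taken from the figure), a fully partitioned approach on a 16-CPU node could allocate the CPUs so that the cpu_count values add up to 16:

SQL> ALTER SYSTEM SET CPU_COUNT=6 SCOPE=BOTH SID='ERP1';
SQL> ALTER SYSTEM SET CPU_COUNT=4 SCOPE=BOTH SID='CRM1';
SQL> ALTER SYSTEM SET CPU_COUNT=4 SCOPE=BOTH SID='DWH1';
SQL> ALTER SYSTEM SET CPU_COUNT=2 SCOPE=BOTH SID='RPT1';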

Small- vs. Large-Scale Cluster Setups

It’s an open secret that most IT requirements are typically derived from business needs. To meet business requirements, you will have either small- or large-scale cluster setups in your environment. This part of the chapter focuses on some useful comparisons between small- and large-scale cluster setups and also addresses the complexity involved in large cluster setups.

It is difficult to say straightaway whether a small-scale cluster setup is better than a large-scale cluster setup, as the needs of one organization are totally dissimilar to those of another. However, once you thoroughly understand the benefits and risks of the two types of setup, you can decide which is the best option to proceed with.

Typically, in any cluster setup, there is a high degree of coordination among the CRS, ASM, and instance processes across the nodes.

Typically, a large-scale cluster setup is a complex environment with a large number of deployed resources. In a large-scale cluster setup, the following situations can be anticipated:

• When the environment is configured with a very large number of resources, for example, hundreds of databases, listeners, and application services across the nodes, there is a high probability of considerable delay in automatically starting the resources on a particular node after a node eviction. Irrespective of how many resources were configured or running on the local node, on node reboot the Clusterware has to scan through the list of resources registered in the cluster, which might delay starting things on that node.

• It can sometimes be time consuming and a bit difficult to gather the required information from the various logs across all the nodes in a cluster when Clusterware-related issues are confronted.

• Any ASM disk/diskgroup-related activity requires ASM communication and actions across all ASM instances in the cluster.

• If an ASM instance on a particular node suffers any operational issues, the other ASM instances across the nodes will be impacted; this might lead to performance degradation, instance crashes, etc.

• When shared storage LUNs are prepared, they must be made available across all cluster nodes. If, for any reason, one node lacks ownership or permission or can’t access a LUN (disk), that disk can’t be added to any diskgroup, and it can be pretty difficult to track which node is having the issue (see the query sketch after this list).

• Starting/stopping the cluster stack from multiple nodes in parallel will lock the GRD across the cluster and might cause issues bringing up the cluster stack subsequently on those nodes.

• If you don’t have an optimal configuration in place for a large-scale cluster implementation, frequent node evictions can be anticipated every now and then.

• You are likely to run into the upper limit on the maximum number of ASM diskgroups and disks when a huge number of databases is deployed in the environment.
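
As noted in the LUN/disk item above, it can be hard to see which node is having trouble with a particular disk. One way to narrow it down from the ASM side is a query such as the following sketch against gv$asm_disk, run from an ASM instance; a disk visible on only some instances, or showing an unexpected header_status, points to the problem node:

SQL> SELECT inst_id, path, header_status, mount_status
     FROM   gv$asm_disk
     ORDER  BY path, inst_id;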

On the other hand, a smaller cluster setup with fewer nodes is easier to manage and involves less complexity. If it is possible to deploy multiple small cluster setups instead of one large-scale setup, that is one of the better options, considering the complexity and effort required for the two types.
