Last updated: 2020-12-02 17:22:35
KCS is highly scalable and allows you to scale your instances in the KCS console based on your business requirements. The scaling process does not interrupt your service.
All KCS instances adopt a hot standby mechanism to ensure high availability. The primary and secondary nodes reside on different servers, and data is automatically synchronized between them. When the primary node encounters a fault, the service is automatically switched to the secondary node. The failover process is transparent to users. The entire process, from fault detection and identification to completed failover, takes less than a minute.
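Because failover completes in under a minute, a client that retries failed operations with a short backoff can ride out the switch without surfacing an error to the application. The helper below is a minimal sketch of that idea; `with_failover_retry` is a hypothetical name, not part of any KCS SDK, and the retry counts and delays are illustrative assumptions.

```python
import time


def with_failover_retry(op, attempts=6, base_delay=1.0):
    """Retry an operation across a brief failover window.

    `op` is any zero-argument callable that raises ConnectionError
    while the primary node is unreachable. Retries use exponential
    backoff, capped so total wait stays within roughly a minute.
    """
    delay = base_delay
    for attempt in range(attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # failover window exceeded; give up
            time.sleep(delay)
            delay = min(delay * 2, 15.0)  # cap the backoff interval
```

Wrapping individual cache reads and writes this way keeps a sub-minute failover invisible to end users, at the cost of slightly higher latency during the switch.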
KCS supports VPCs to ensure data security. You can create your own private network based on a VPC. If your KCS instances reside on the basic network, you must configure security group rules to control which KEC instances can access your KCS instances. By default, a KCS instance can be accessed only through its private IP address over the internal network, which further enhances data security.
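When verifying this setup from a KEC instance, a quick TCP reachability check distinguishes a misconfigured security group from an application-level problem. The sketch below assumes nothing about KCS beyond a TCP endpoint; `kcs_reachable` and the host/port arguments are hypothetical names for illustration.

```python
import socket


def kcs_reachable(host, port, timeout=3.0):
    """Return True if the KCS private endpoint accepts TCP connections.

    Run this from a KEC instance on the same internal network. A False
    result usually means the security group rules do not yet allow this
    instance, or the private IP address/port is wrong.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `kcs_reachable("10.0.0.12", 6379)` from an allowed KEC instance should return True once the security group rule is in place (the address and port here are placeholders).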
KCS provides an in-memory store, where all data is accessed in memory. Compared with disk-based storage, KCS offers significantly better I/O performance. In addition to high performance, KCS supports data persistence to ensure high data reliability.
KCS provides a graphical console for you to manage instances. For example, you can create instances, modify parameters, and scale instances in the console. KCS also provides detailed monitoring metrics, such as input/output operations per second (IOPS) and queries per second (QPS), that allow you to check the status of your instances at any time. KCS is a fully managed service. You no longer need to perform operations and maintenance (O&M) tasks such as hardware installation and database deployment, which reduces your usage and management costs.