Last updated: 2020-05-19 20:18:08
| English Term | Acronym/Abbreviation | Chinese Term | Description |
| --- | --- | --- | --- |
| Kingsoft MapReduce | KMR | | The Kingsoft MapReduce cluster service on the Kingsoft Cloud platform, which provides services externally via the Web. |
| KMR cluster | Cluster | KMR集群 | A Hadoop cluster consisting of multiple Kingsoft Cloud host instances. |
| KMR master node | Master | 主节点 | Manages the cluster and distributes computing programs and raw data sets to core instances. It also tracks the execution status of each computing job and monitors the running status of each instance. The KMR master node corresponds to the master node of a Hadoop system. A KMR cluster has only one master node. |
| KMR core node | Core | 核心节点 | Mainly performs the cluster's computing jobs, and also serves as a data node in the Hadoop Distributed File System to store data. The KMR core node corresponds to a slave node of a Hadoop system. A KMR cluster can have two or more core nodes. |
| KMR job | Job | 作业 | A job is a unit of work submitted to the cluster. A job may contain one or more Hadoop tasks, or instructions to install or configure an application. You can submit up to 256 jobs to a cluster. |
| SSH key | | SSH密钥 | The SSH public key uploaded by the user on the console. |
| Kingsoft Standard Storage Service | KS3 | 云存储 | Kingsoft's standard cloud storage service. |
| Hadoop file system | HDFS | | The Hadoop Distributed File System (HDFS) is a distributed, scalable file system used by Hadoop. |
| MapReduce | MR | | MapReduce is a programming model for distributed computing, used for parallel operations on large-scale data sets. |
| Hadoop | | | Apache Hadoop is an open-source Java software framework that supports processing large amounts of data across a set of servers. It can run on one server or on thousands. Hadoop uses a programming model called MapReduce to distribute processing jobs across servers, and implements a distributed file system called HDFS to store data across servers. |
| Hue | | | Hue is an open-source Apache Hadoop UI system. Through Hue, users can interact with a Hadoop cluster from a web console in the browser to analyze and process data, such as manipulating data on HDFS, running MapReduce jobs, executing Hive SQL statements, and browsing HBase databases. |
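To make the MapReduce entry above concrete, here is a minimal, single-process sketch of the map/shuffle/reduce flow using the classic word-count example. This is an illustrative simulation only, not KMR or Hadoop API code; on a real KMR cluster the map and reduce phases run in parallel across the core nodes, while here they run sequentially in plain Python.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    for line in lines:
        for word in line.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["hello world", "hello hadoop"]
result = reduce_phase(shuffle(map_phase(lines)))
print(result)  # {'hello': 2, 'world': 1, 'hadoop': 1}
```

Because each map call depends only on its own input line and each reduce call only on one key's grouped values, a framework like Hadoop can run them on different nodes and merge the results, which is what makes the model scale.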