Scenario
Services such as HDFS, YARN, and Hive have a large number of configuration items. To make these services easier to use, the custom service configuration feature lets you modify or add configuration items before creating a cluster or adding a service.
Prerequisites
Currently, configuration items can be modified or added for some files of HDFS, Hive, Spark, HBase, YARN, and Kyuubi. Other component services do not yet support the custom service configuration feature.
Procedure
1. Modify or add configurations for the selected services during ordering.
1) On the 翼MR order page, in the software configuration step, select the required component services.
2) In the software configuration step, turn on the custom service configuration switch.
3) Enter the configuration information in JSON format in the input box.
2. Modify or add configurations for the selected services when adding cluster services.
1) On the cluster service management page of the 翼MR console, select the component services to be deployed.
2) In the pop-up dialog, turn on the custom service configuration switch.
3) Enter the configuration information in JSON format in the input box.
Configuration Description
When creating a cluster or adding cluster services, the custom service configuration feature lets you submit a JSON-format configuration that overrides or adds to the default parameters of cluster services. Example JSON content:
[
    {
        "applicationName":"YARN",
        "configFileName":"mapred-site.xml",
        "configItemKey":"mapreduce.task.timeout",
        "configItemValue":"200000"
    },
    {
        "applicationName":"HDFS",
        "configFileName":"hdfs-site.xml",
        "configItemKey":"dfs.replication",
        "configItemValue":"2"
    }
]

Parameter descriptions:
| Parameter | Description |
| --- | --- |
| applicationName | Name of the service. |
| configFileName | Name of the configuration file. |
| configItemKey | Name of the configuration item. |
| configItemValue | Value to set for this configuration item. |
The custom service configuration method differs depending on the target file.
1. Full-File Overwrite
Some configuration files can only be overwritten in full. Use CONF_CONTENT as the configItemKey and put the escaped content of the entire configuration file in configItemValue; after deployment, the submitted content automatically overwrites the service's corresponding configuration file. Common characters in the file content must be escaped, such as double quotes, single quotes, newlines, and backslashes. The files that must be modified by full overwrite are listed below, followed by an escaping example:
| Service | Configuration File |
| --- | --- |
| HBase | hbase-env.sh |
| | log4j.properties |
| HDFS | hadoop-env.sh |
| YARN | mapred-env.sh |
| | yarn-env.sh |
| Hive | hive-env.sh |
| | hive-log4j2.properties |
| Kyuubi | kyuubi-env.sh |
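As an illustration, here is a minimal sketch of a full-overwrite entry for HBase's hbase-env.sh. The two export lines are hypothetical placeholder content, not defaults of this product; the point is the escaping: newlines become \n and inner double quotes become \" inside configItemValue.

[
    {
        "applicationName":"HBase",
        "configFileName":"hbase-env.sh",
        "configItemKey":"CONF_CONTENT",
        "configItemValue":"export HBASE_HEAPSIZE=4G\nexport HBASE_OPTS=\"$HBASE_OPTS -verbose:gc\"\n"
    }
]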
2. Per-Item Modification
Some configuration files support modifying individual configuration items. The supported items are listed in the table below.
Notes
1) For the krb5.conf file of Kerberos, only kerberosRealm is supported as the configItemKey (used to set the realm); other configuration items are not yet supported. See the example after these notes.
2) For the custom_spark-env item in Spark's spark-env.sh file, the file content must be escaped in the same way as in the full-file overwrite method described above.
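As an illustration of note 1), a minimal sketch of a realm entry; the applicationName value "Kerberos" and the realm EXAMPLE.COM are assumptions made for this example and are not confirmed by this document:

[
    {
        "applicationName":"Kerberos",
        "configFileName":"krb5.conf",
        "configItemKey":"kerberosRealm",
        "configItemValue":"EXAMPLE.COM"
    }
]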
| Service | Configuration File | Configuration Item |
| --- | --- | --- |
| HBase | hbase-site.xml | hbase.cluster.distributed |
| | | hbase.coprocessor.abortonerror |
| | | hbase.coprocessor.master.classes |
| | | hbase.coprocessor.region.classes |
| | | hbase.master.kerberos.principal |
| | | hbase.master.keytab.file |
| | | hbase.master.loadbalancer.class |
| | | hbase.quota.enabled |
| | | hbase.quota.refresh.period |
| | | hbase.regionserver.kerberos.principal |
| | | hbase.regionserver.keytab.file |
| | | hbase.rest.authentication.kerberos.keytab |
| | | hbase.rest.authentication.kerberos.principal |
| | | hbase.rest.kerberos.principal |
| | | hbase.rootdir |
| | | hbase.security.authentication |
| | | hbase.security.authorization |
| | | hbase.superuser |
| | | hbase.thrift.kerberos.principal |
| | | hbase.thrift.keytab.file |
| | | hbase.tmp.dir |
| | | hbase.unsafe.stream.capability.enforce |
| | | hbase.wal.provider |
| | | hbase.zookeeper.property.clientPort |
| | | hbase.zookeeper.quorum |
| | | zookeeper.znode.parent |
| HDFS | core-site.xml | dfs.datanode.cached-dfsused.check.interval.ms |
| | | fs.AbstractFileSystem.jfs.impl |
| | | fs.defaultFS |
| | | fs.du.interval |
| | | fs.getspaceused.jitterMillis |
| | | fs.jfs.impl |
| | | fs.permissions.umask-mode |
| | | fs.trash.checkpoint.interval |
| | | fs.trash.interval |
| | | ha.failover-controller.new-active.rpc-timeout.ms |
| | | ha.health-monitor.rpc-timeout.ms |
| | | ha.zookeeper.quorum |
| | | ha.zookeeper.session-timeout.ms |
| | | hadoop.caller.context.enabled |
| | | hadoop.kerberos.kinit.command |
| | | hadoop.proxyuser.hdfs.groups |
| | | hadoop.proxyuser.hdfs.hosts |
| | | hadoop.proxyuser.hive.groups |
| | | hadoop.proxyuser.hive.hosts |
| | | hadoop.proxyuser.HTTP.groups |
| | | hadoop.proxyuser.HTTP.hosts |
| | | hadoop.proxyuser.httpfs.groups |
| | | hadoop.proxyuser.httpfs.hosts |
| | | hadoop.proxyuser.yarn.groups |
| | | hadoop.proxyuser.yarn.hosts |
| | | hadoop.rpc.protection |
| | | hadoop.security.auth_to_local |
| | | hadoop.security.authentication |
| | | hadoop.security.authorization |
| | | hadoop.security.group.mapping |
| | | hadoop.security.group.mapping.ldap.base |
| | | hadoop.security.group.mapping.ldap.bind.password.file |
| | | hadoop.security.group.mapping.ldap.bind.user |
| | | hadoop.security.group.mapping.ldap.num.attempts |
| | | hadoop.security.group.mapping.ldap.num.attempts.before.failover |
| | | hadoop.security.group.mapping.ldap.posix.attr.uid.name |
| | | hadoop.security.group.mapping.ldap.search.attr.group.name |
| | | hadoop.security.group.mapping.ldap.search.attr.member |
| | | hadoop.security.group.mapping.ldap.search.filter.group |
| | | hadoop.security.group.mapping.ldap.search.filter.user |
| | | hadoop.security.group.mapping.provider.ldap4users |
| | | hadoop.security.group.mapping.provider.ldap4users.ldap.url |
| | | hadoop.security.group.mapping.provider.shell4services |
| | | hadoop.security.group.mapping.providers |
| | | hadoop.security.group.mapping.providers.combined |
| | | httpfs.proxyuser.mapred.groups |
| | | httpfs.proxyuser.mapred.hosts |
| | | io.compression.codec.lzo.class |
| | | io.file.buffer.size |
| | | ipc.client.connection.maxidletime |
| | | ipc.server.listen.queue.size |
| | | ipc.server.log.slow.rpc |
| | | jeekefs.cache-dir |
| | | jeekefs.cache-size |
| | | jeekefs.discover-nodes-url |
| | | jeekefs.meta |
| | | jeekefs.server-principal |
| | | topology.script.file.name |
| | hdfs-site.xml | dfs.block.access.token.enable |
| | | dfs.blockreport.incremental.intervalMsec |
| | | dfs.blockreport.initialDelay |
| | | dfs.blockreport.intervalMsec |
| | | dfs.blockreport.split.threshold |
| | | dfs.blocksize |
| | | dfs.client.failover.proxy.provider.ctyunns |
| | | dfs.client.read.shortcircuit |
| | | dfs.client.socket-timeout |
| | | dfs.cluster.administrators |
| | | dfs.datanode.address |
| | | dfs.datanode.cached-dfsused.check.interval.ms |
| | | dfs.datanode.data.dir |
| | | dfs.datanode.data.dir.perm |
| | | dfs.datanode.directoryscan.threads |
| | | dfs.datanode.du.reserved.calculator |
| | | dfs.datanode.du.reserved.pct |
| | | dfs.datanode.failed.volumes.tolerated |
| | | dfs.datanode.fileio.profiling.sampling.percentage |
| | | dfs.datanode.handler.count |
| | | dfs.datanode.http.address |
| | | dfs.datanode.kerberos.principal |
| | | dfs.datanode.keytab.file |
| | | dfs.datanode.max.transfer.threads |
| | | dfs.datanode.max.xcievers |
| | | dfs.datanode.peer.stats.enabled |
| | | dfs.domain.socket.path |
| | | dfs.encrypt.data.transfer.cipher.suites |
| | | dfs.ha.automatic-failover.enabled |
| | | dfs.ha.fencing.methods |
| | | dfs.ha.namenodes.ctyunns |
| | | dfs.hosts.exclude |
| | | dfs.image.transfer.bandwidthPerSec |
| | | dfs.internal.nameservices |
| | | dfs.journalnode.edits.dir.ctyunns |
| | | dfs.journalnode.http-address |
| | | dfs.journalnode.kerberos.internal.spnego.principal |
| | | dfs.journalnode.kerberos.principal |
| | | dfs.journalnode.keytab.file |
| | | dfs.journalnode.rpc-address |
| | | dfs.namenode.accesstime.precision |
| | | dfs.namenode.acls.enabled |
| | | dfs.namenode.audit.log.async |
| | | dfs.namenode.avoid.read.stale.datanode |
| | | dfs.namenode.avoid.write.stale.datanode |
| | | dfs.namenode.block.deletion.increment |
| | | dfs.namenode.checkpoint.dir |
| | | dfs.namenode.checkpoint.edits.dir |
| | | dfs.namenode.checkpoint.period |
| | | dfs.namenode.checkpoint.txns |
| | | dfs.namenode.deletefiles.limit |
| | | dfs.namenode.edit.log.autoroll.multiplier.threshold |
| | | dfs.namenode.fs-limits.max-directory-items |
| | | dfs.namenode.fslock.fair |
| | | dfs.namenode.handler.count |
| | | dfs.namenode.http-address.ctyunns.nn1 |
| | | dfs.namenode.http-address.ctyunns.nn2 |
| | | dfs.namenode.kerberos.internal.spnego.principal |
| | | dfs.namenode.kerberos.principal |
| | | dfs.namenode.keytab.file |
| | | dfs.namenode.lock.detailed-metrics.enabled |
| | | dfs.namenode.name.dir |
| | | dfs.namenode.name.dir.restore |
| | | dfs.namenode.quota.init-threads |
| | | dfs.namenode.rpc-address.ctyunns.nn1 |
| | | dfs.namenode.rpc-address.ctyunns.nn2 |
| | | dfs.namenode.safemode.threshold-pct |
| | | dfs.namenode.service.handler.count |
| | | dfs.namenode.servicerpc-address.ctyunns.nn1 |
| | | dfs.namenode.servicerpc-address.ctyunns.nn2 |
| | | dfs.namenode.shared.edits.dir.ctyunns |
| | | dfs.namenode.stale.datanode.interval |
| | | dfs.namenode.startup.delay.block.deletion.sec |
| | | dfs.namenode.support.allow.format |
| | | dfs.namenode.write.stale.datanode.ratio |
| | | dfs.nameservices |
| | | dfs.permissions.superusergroup |
| | | dfs.qjournal.select-input-streams.timeout.ms |
| | | dfs.qjournal.start-segment.timeout.ms |
| | | dfs.qjournal.write-txns.timeout.ms |
| | | dfs.replication |
| | | dfs.replication.max |
| | | dfs.web.authentication.kerberos.keytab |
| | | dfs.web.authentication.kerberos.principal |
| | | dfs.webhdfs.enabled |
| | | hadoop.caller.context.enabled |
| | | rpc.metrics.percentiles.intervals |
| | | rpc.metrics.quantile.enable |
| Hive | hive-site.xml | hive.auto.convert.join |
| | | hive.auto.convert.sortmerge.join |
| | | hive.auto.convert.sortmerge.join.to.mapjoin |
| | | hive.compactor.initiator.on |
| | | hive.default.fileformat |
| | | hive.default.fileformat.managed |
| | | hive.exec.compress.output |
| | | hive.exec.dynamic.partition |
| | | hive.exec.stagingdir |
| | | hive.execution.engine |
| | | hive.hook.proto.base-directory |
| | | hive.insert.into.multilevel.dirs |
| | | hive.limit.optimize.enable |
| | | hive.mapred.reduce.tasks.speculative.execution |
| | | hive.merge.mapredfiles |
| | | hive.metastore.authorization.storage.checks |
| | | hive.metastore.warehouse.dir |
| | | hive.metastore.warehouse.external.dir |
| | | hive.optimize.bucketmapjoin |
| | | hive.optimize.dynamic.partition.hashjoin |
| | | hive.optimize.index.filter |
| | | hive.optimize.metadataonly |
| | | hive.optimize.remove.identity.project |
| | | hive.server2.proxy.user |
| | | hive.stats.fetch.column.stats |
| | | hive.txn.strict.locking.mode |
| | | hive.update.last.access.time.interval |
| | | hive.user.install.directory |
| | | hive.vectorized.execution.mapjoin.minmax.enabled |
| | | hive.vectorized.execution.mapjoin.native.fast.hashtable.enabled |
| | | hive.vectorized.groupby.checkinterval |
| | | metastore.expression.proxy |
| Kyuubi | kyuubi-defaults.conf | kyuubi.backend.server.event.json.log.path |
| | | kyuubi.backend.server.event.loggers |
| | | kyuubi.delegation.token.renew.interval |
| | | kyuubi.ha.namespace |
| | | kyuubi.metrics.reporters |
| | | kyuubi.operation.getTables.ignoreTableProperties |
| | | kyuubi.session.engine.idle.timeout |
| | | kyuubi.session.idle.timeout |
| | | spark.master |
| | | spark.submit.deployMode |
| | | spark.yarn.queue |
| Spark | spark-defaults.conf | spark.driver.cores |
| | | spark.driver.extraJavaOptions |
| | | spark.driver.extraLibraryPath |
| | | spark.driver.maxResultSize |
| | | spark.driver.memory |
| | | spark.dynamicAllocation.enabled |
| | | spark.dynamicAllocation.initialExecutors |
| | | spark.dynamicAllocation.maxExecutors |
| | | spark.dynamicAllocation.minExecutors |
| | | spark.eventLog.dir |
| | | spark.executor.cores |
| | | spark.executor.extraJavaOptions |
| | | spark.executor.extraLibraryPath |
| | | spark.executor.heartbeatInterval |
| | | spark.executor.memory |
| | | spark.executorEnv.JAVA_HOME |
| | | spark.files.openCostInBytes |
| | | spark.hadoop.mapreduce.output.fileoutputformat.compress |
| | | spark.hadoop.mapreduce.output.fileoutputformat.compress.codec |
| | | spark.hadoop.yarn.timeline-service.enabled |
| | | spark.history.fs.cleaner.enabled |
| | | spark.history.fs.cleaner.interval |
| | | spark.history.fs.cleaner.maxAge |
| | | spark.history.fs.logDirectory |
| | | spark.history.kerberos.enabled |
| | | spark.history.kerberos.keytab |
| | | spark.history.store.maxDiskUsage |
| | | spark.history.ui.maxApplications |
| | | spark.history.ui.port |
| | | spark.io.compression.lz4.blockSize |
| | | spark.kryo.unsafe |
| | | spark.kryoserializer.buffer.max |
| | | spark.locality.wait |
| | | spark.master |
| | | spark.memory.offHeap.enabled |
| | | spark.memory.offHeap.size |
| | | spark.network.timeout |
| | | spark.port.maxRetries |
| | | spark.rdd.parallelListingThreshold |
| | | spark.reducer.maxSizeInFlight |
| | | spark.resultGetter.threads |
| | | spark.rpc.io.backLog |
| | | spark.scheduler.maxRegisteredResourcesWaitingTime |
| | | spark.shuffle.accurateBlockThreshold |
| | | spark.shuffle.file.buffer |
| | | spark.shuffle.io.connectionTimeout |
| | | spark.shuffle.manager |
| | | spark.shuffle.mapOutput.dispatcher.numThreads |
| | | spark.shuffle.memoryFraction |
| | | spark.shuffle.push.enabled |
| | | spark.shuffle.push.maxBlockSizeToPush |
| | | spark.shuffle.push.merge.finalizeThreads |
| | | spark.shuffle.push.mergersMinStaticThreshold |
| | | spark.shuffle.readHostLocalDisk |
| | | spark.shuffle.service.enabled |
| | | spark.shuffle.unsafe.file.output.buffer |
| | | spark.speculation |
| | | spark.speculation.interval |
| | | spark.speculation.minTaskRuntime |
| | | spark.speculation.multiplier |
| | | spark.speculation.quantile |
| | | spark.sql.adaptive.coalescePartitions.initialPartitionNum |
| | | spark.sql.adaptive.coalescePartitions.minPartitionNum |
| | | spark.sql.adaptive.enabled |
| | | spark.sql.adaptive.forceApply |
| | | spark.sql.adaptive.forceOptimizeSkewedJoin |
| | | spark.sql.adaptive.shuffle.targetPostShuffleInputSize |
| | | spark.sql.autoBroadcastJoinThreshold |
| | | spark.sql.catalog.spark_catalog |
| | | spark.sql.catalog.spark_catalog.type |
| | | spark.sql.cbo.joinReorder.enabled |
| | | spark.sql.extensions |
| | | spark.sql.files.maxPartitionBytes |
| | | spark.sql.files.openCostInBytes |
| | | spark.sql.finalStage.adaptive.advisoryPartitionSizeInBytes |
| | | spark.sql.finalStage.adaptive.coalescePartitions.minPartitionNum |
| | | spark.sql.finalStage.adaptive.skewJoin.skewedPartitionFactor |
| | | spark.sql.finalStage.adaptive.skewJoin.skewedPartitionThresholdInBytes |
| | | spark.sql.hive.convertMetastoreOrc |
| | | spark.sql.hive.dropPartitionByName.enabled |
| | | spark.sql.inMemoryColumnarStorage.batchSize |
| | | spark.sql.join.preferSortMergeJoin |
| | | spark.sql.legacy.charVarcharAsString |
| | | spark.sql.legacy.timeParserPolicy |
| | | spark.sql.optimizer.finalStageConfigIsolation.enabled |
| | | spark.sql.optimizer.inferRebalanceAndSortOrders.enabled |
| | | spark.sql.optimizer.insertRepartitionBeforeWriteIfNoShuffle.enabled |
| | | spark.sql.optimizer.inSetConversionThreshold |
| | | spark.sql.optimizer.runtime.bloomFilter.creationSideThreshold |
| | | spark.sql.optimizer.runtime.bloomFilter.enabled |
| | | spark.sql.optimizer.runtimeFilter.number.threshold |
| | | spark.sql.optimizer.runtimeFilter.semiJoinReduction.enabled |
| | | spark.sql.orc.aggregatePushdown |
| | | spark.sql.orc.columnarReaderBatchSize |
| | | spark.sql.orc.enableNestedColumnVectorizedReader |
| | | spark.sql.parquet.aggregatePushdown |
| | | spark.sql.parquet.columnarReaderBatchSize |
| | | spark.sql.parquet.enableNestedColumnVectorizedReader |
| | | spark.sql.parquet.pushdown.inFilterThreshold |
| | | spark.sql.query.table.file.max.count |
| | | spark.sql.query.table.file.max.length |
| | | spark.sql.query.table.partition.max.count |
| | | spark.sql.sessionWindow.buffer.in.memory.threshold |
| | | spark.sql.shuffle.partitions |
| | | spark.sql.sources.parallelPartitionDiscovery.parallelism |
| | | spark.sql.sources.parallelPartitionDiscovery.threshold |
| | | spark.sql.statistics.fallBackToHdfs |
| | | spark.sql.storeAssignmentPolicy |
| | | spark.sql.subquery.maxThreadThreshold |
| | | spark.sql.support.block.inferior.sql |
| | | spark.storage.decommission.shuffleBlocks.maxThreads |
| | | spark.task.reaper.enabled |
| | | spark.unsafe.sorter.spill.reader.buffer.size |
| | | spark.yarn.appMasterEnv.JAVA_HOME |
| | | spark.yarn.containerLauncherMaxThreads |
| | | spark.yarn.scheduler.heartbeat.interval-ms |
| | | spark.yarn.scheduler.initial-allocation.interval |
| | spark-env.sh | custom_spark-env |
| | | spark_engine |
| YARN | mapred-site.xml | mapreduce.application.classpath |
| | | mapreduce.cluster.acls.enabled |
| | | mapreduce.framework.name |
| | | mapreduce.job.acl-modify-job |
| | | mapreduce.job.counters.counter.name.max |
| | | mapreduce.job.counters.group.name.max |
| | | mapreduce.job.counters.groups.max |
| | | mapreduce.job.counters.max |
| | | mapreduce.jobhistory.admin.acl |
| | | mapreduce.jobhistory.bind-host |
| | | mapreduce.jobhistory.done-dir |
| | | mapreduce.jobhistory.http.policy |
| | | mapreduce.jobhistory.intermediate-done-dir |
| | | mapreduce.jobhistory.recovery.enable |
| | | mapreduce.jobhistory.recovery.store.leveldb.path |
| | | mapreduce.map.env |
| | | mapreduce.map.java.opts |
| | | mapreduce.map.log.level |
| | | mapreduce.map.memory.mb |
| | | mapreduce.map.output.compress |
| | | mapreduce.map.output.compress.codec |
| | | mapreduce.map.sort.spill.percent |
| | | mapreduce.map.speculative |
| | | mapreduce.output.fileoutputformat.compress |
| | | mapreduce.output.fileoutputformat.compress.codec |
| | | mapreduce.reduce.env |
| | | mapreduce.reduce.input.buffer.percent |
| | | mapreduce.reduce.java.opts |
| | | mapreduce.reduce.log.level |
| | | mapreduce.reduce.memory.mb |
| | | mapreduce.reduce.shuffle.fetch.retry.enabled |
| | | mapreduce.reduce.shuffle.fetch.retry.interval-ms |
| | | mapreduce.reduce.shuffle.fetch.retry.timeout-ms |
| | | mapreduce.reduce.shuffle.input.buffer.percent |
| | | mapreduce.reduce.shuffle.merge.percent |
| | | mapreduce.reduce.shuffle.parallelcopies |
| | | mapreduce.reduce.speculative |
| | | mapreduce.shuffle.port |
| | | mapreduce.task.io.sort.factor |
| | | mapreduce.task.io.sort.mb |
| | | mapreduce.task.timeout |
| | | yarn.app.mapreduce.am.env |
| | | yarn.app.mapreduce.am.log.level |
| | | yarn.app.mapreduce.am.resource.mb |
| | | yarn.app.mapreduce.am.staging-dir |
| | yarn-site.xml | hadoop.http.authentication.simple.anonymous.allowed |
| | | hadoop.http.filter.initializers |
| | | hadoop.registry.client.auth |
| | | yarn.acl.enable |
| | | yarn.log-aggregation.retain-seconds |
| | | yarn.log-aggregation-enable |
| | | yarn.node-labels.enabled |
| | | yarn.node-labels.fs-store.root-dir |
| | | yarn.nodemanager.address |
| | | yarn.nodemanager.container-executor.class |
| | | yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage |
| | | yarn.nodemanager.localizer.cache.target-size-mb |
| | | yarn.nodemanager.localizer.client.thread-count |
| | | yarn.nodemanager.localizer.fetch.thread-count |
| | | yarn.nodemanager.log.retain-seconds |
| | | yarn.nodemanager.log-aggregation.compression-type |
| | | yarn.nodemanager.log-aggregation.debug-enabled |
| | | yarn.nodemanager.log-aggregation.num-log-files-per-app |
| | | yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds |
| | | yarn.nodemanager.recovery.dir |
| | | yarn.nodemanager.recovery.enabled |
| | | yarn.nodemanager.recovery.supervised |
| | | yarn.nodemanager.remote-app-log-dir |
| | | yarn.nodemanager.remote-app-log-dir-suffix |
| | | yarn.nodemanager.resource.cpu-vcores |
| | | yarn.nodemanager.resource.memory-mb |
| | | yarn.nodemanager.resource.percentage-physical-cpu-limit |
| | | yarn.nodemanager.resourcemanager.connect.wait.secs |
| | | yarn.nodemanager.vmem-check-enabled |
| | | yarn.nodemanager.vmem-pmem-ratio |
| | | yarn.nodemanager.webapp.cross-origin.enabled |
| | | yarn.resourcemanager.cluster-id |
| | | yarn.resourcemanager.fusing.enable |
| | | yarn.resourcemanager.fusing-max-api-get-jobs |
| | | yarn.resourcemanager.ha.rm-ids |
| | | yarn.resourcemanager.hostname.rm1 |
| | | yarn.resourcemanager.hostname.rm2 |
| | | yarn.resourcemanager.max-completed-applications |
| | | yarn.resourcemanager.recovery.enabled |
| | | yarn.resourcemanager.scheduler.autocorrect.container.allocation |
| | | yarn.resourcemanager.scheduler.class |
| | | yarn.resourcemanager.scheduler.monitor.enable |
| | | yarn.resourcemanager.store.class |
| | | yarn.resourcemanager.webapp.address.rm1 |
| | | yarn.resourcemanager.webapp.address.rm2 |
| | | yarn.resourcemanager.webapp.cross-origin.enabled |
| | | yarn.scheduler.maximum-allocation-mb |
| | | yarn.scheduler.maximum-allocation-vcores |
| | | yarn.scheduler.minimum-allocation-mb |
| | | yarn.scheduler.minimum-allocation-vcores |
| | | yarn.timeline-service.client.best-effort |
| | | yarn.timeline-service.client.max-retries |
| | | yarn.timeline-service.enabled |
| | | yarn.timeline-service.http-cross-origin.enabled |
| | | yarn.webapp.ui2.enable |