redis.conf Translation and Configuration (Part 6) [Redis 6.0.6]

Reader contribution · 2025-04-07

While learning Redis I ran into redis.conf and, on a whim, decided to work through it as a translation series.

Before the source code, there are no secrets.

Table of Contents

Advanced Config

Original

Translation

Active Defragmentation

Original

Translation

Advanced Config

Original

############################### ADVANCED CONFIG ###############################

# Hashes are encoded using a memory efficient data structure when they have a
# small number of entries, and the biggest entry does not exceed a given
# threshold. These thresholds can be configured using the following directives.
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Lists are also encoded in a special way to save a lot of space.
# The number of entries allowed per internal list node can be specified
# as a fixed maximum size or a maximum number of elements.
# For a fixed maximum size, use -5 through -1, meaning:
# -5: max size: 64 Kb  <-- not recommended for normal workloads
# -4: max size: 32 Kb  <-- not recommended
# -3: max size: 16 Kb  <-- probably not recommended
# -2: max size: 8 Kb   <-- good
# -1: max size: 4 Kb   <-- good
# Positive numbers mean store up to _exactly_ that number of elements
# per list node.
# The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size),
# but if your use case is unique, adjust the settings as necessary.
list-max-ziplist-size -2

# Lists may also be compressed.
# Compress depth is the number of quicklist ziplist nodes from *each* side of
# the list to *exclude* from compression. The head and tail of the list
# are always uncompressed for fast push/pop operations. Settings are:
# 0: disable all list compression
# 1: depth 1 means "don't start compressing until after 1 node into the list,
#    going from either the head or tail"
#    So: [head]->node->node->...->node->[tail]
#    [head], [tail] will always be uncompressed; inner nodes will compress.
# 2: [head]->[next]->node->node->...->node->[prev]->[tail]
#    2 here means: don't compress head or head->next or tail->prev or tail,
#    but compress all nodes between them.
# 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
#    etc.
list-compress-depth 0

# Sets have a special encoding in just one case: when a set is composed
# of just strings that happen to be integers in radix 10 in the range
# of 64 bit signed integers.
# The following configuration setting sets the limit in the size of the
# set in order to use this special memory saving encoding.
set-max-intset-entries 512

# Similarly to hashes and lists, sorted sets are also specially encoded in
# order to save a lot of space. This encoding is only used when the length and
# elements of a sorted set are below the following limits:
zset-max-ziplist-entries 128
zset-max-ziplist-value 64

# HyperLogLog sparse representation bytes limit. The limit includes the
# 16 bytes header. When an HyperLogLog using the sparse representation crosses
# this limit, it is converted into the dense representation.
#
# A value greater than 16000 is totally useless, since at that point the
# dense representation is more memory efficient.
#
# The suggested value is ~ 3000 in order to have the benefits of
# the space efficient encoding without slowing down too much PFADD,
# which is O(N) with the sparse encoding. The value can be raised to
# ~ 10000 when CPU is not a concern, but space is, and the data set is
# composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
hll-sparse-max-bytes 3000

# Streams macro node max size / items. The stream data structure is a radix
# tree of big nodes that encode multiple items inside. Using this configuration
# it is possible to configure how big a single node can be in bytes, and the
# maximum number of items it may contain before switching to a new node when
# appending new stream entries. If any of the following settings are set to
# zero, the limit is ignored, so for instance it is possible to set just a
# max entries limit by setting max-bytes to 0 and max-entries to the desired
# value.
stream-node-max-bytes 4096
stream-node-max-entries 100

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
# order to help rehashing the main Redis hash table (the one mapping top-level
# keys to values). The hash table implementation Redis uses (see dict.c)
# performs a lazy rehashing: the more operations you run into a hash table
# that is rehashing, the more rehashing "steps" are performed, so if the
# server is idle the rehashing is never complete and some more memory is used
# by the hash table.
#
# The default is to use this millisecond 10 times every second in order to
# actively rehash the main dictionaries, freeing memory when possible.
#
# If unsure:
# use "activerehashing no" if you have hard latency requirements and it is
# not a good thing in your environment that Redis can reply from time to time
# to queries with 2 milliseconds delay.
#
# use "activerehashing yes" if you don't have such hard requirements but
# want to free memory asap when possible.
activerehashing yes

# The client output buffer limits can be used to force disconnection of clients
# that are not reading data from the server fast enough for some reason (a
# common reason is that a Pub/Sub client can't consume messages as fast as the
# publisher can produce them).
#
# The limit can be set differently for the three different classes of clients:
#
# normal -> normal clients including MONITOR clients
# replica -> replica clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is the following:
#
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is immediately disconnected once the hard limit is reached, or if
# the soft limit is reached and remains reached for the specified number of
# seconds (continuously).
# So for instance if the hard limit is 32 megabytes and the soft limit is
# 16 megabytes / 10 seconds, the client will get disconnected immediately
# if the size of the output buffers reach 32 megabytes, but will also get
# disconnected if the client reaches 16 megabytes and continuously overcomes
# the limit for 10 seconds.
#
# By default normal clients are not limited because they don't receive data
# without asking (in a push way), but just after a request, so only
# asynchronous clients may create a scenario where data is requested faster
# than it can read.
#
# Instead there is a default limit for pubsub and replica clients, since
# subscribers and replicas receive data in a push fashion.
#
# Both the hard or the soft limit can be disabled by setting them to zero.
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

# Client query buffers accumulate new commands. They are limited to a fixed
# amount by default in order to avoid that a protocol desynchronization (for
# instance due to a bug in the client) will lead to unbound memory usage in
# the query buffer. However you can configure it here if you have very special
# needs, such as huge multi/exec requests or alike.
#
# client-query-buffer-limit 1gb

# In the Redis protocol, bulk requests, that are, elements representing single
# strings, are normally limited to 512 mb. However you can change this limit
# here.
#
# proto-max-bulk-len 512mb

# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10

# Normally it is useful to have an HZ value which is proportional to the
# number of clients connected. This is useful in order, for instance, to
# avoid too many clients are processed for each background task invocation
# in order to avoid latency spikes.
#
# Since the default HZ value by default is conservatively set to 10, Redis
# offers, and enables by default, the ability to use an adaptive HZ value
# which will temporary raise when there are many connected clients.
#
# When dynamic HZ is enabled, the actual configured HZ will be used
# as a baseline, but multiples of the configured HZ value will be actually
# used as needed once more clients are connected. In this way an idle
# instance will use very little CPU time while a busy instance will be
# more responsive.
dynamic-hz yes

# When a child rewrites the AOF file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
aof-rewrite-incremental-fsync yes

# When redis saves RDB file, if the following option is enabled
# the file will be fsync-ed every 32 MB of data generated. This is useful
# in order to commit the file to the disk more incrementally and avoid
# big latency spikes.
rdb-save-incremental-fsync yes

# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performances and how the keys LFU change over time, which
# is possible to inspect via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key, its maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
#
#   redis-benchmark -n 1000000 incr foo
#   redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if it has a value
# <= 10).
#
# The default value for the lfu-decay-time is 1. A special value of 0 means to
# decay the counter every time it happens to be scanned.
#
# lfu-log-factor 10
# lfu-decay-time 1


Translation

Hashes are encoded with a memory-efficient data structure when they have a small number of entries and the biggest entry does not exceed a given threshold. These thresholds can be configured with the following directives:

hash-max-ziplist-entries 512
hash-max-ziplist-value 64

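A quick way to watch this threshold in action is OBJECT ENCODING (a sketch assuming a local Redis 6.0 with the default settings; the key name and replies are illustrative):

127.0.0.1:6379> HSET user:1 name alice age 30
(integer) 2
127.0.0.1:6379> OBJECT ENCODING user:1
"ziplist"
127.0.0.1:6379> HSET user:1 bio "............ some value longer than 64 bytes ............"
(integer) 1
127.0.0.1:6379> OBJECT ENCODING user:1
"hashtable"

Once any field value exceeds hash-max-ziplist-value, or the hash grows past hash-max-ziplist-entries fields, the conversion to the regular hashtable encoding is permanent for that key.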

Lists are also encoded in a special way to save a lot of space. The number of entries allowed per internal list node can be specified either as a fixed maximum size or as a maximum number of elements.

For a fixed maximum size, use -5 through -1, meaning:

# -5: max size: 64 Kb <-- not recommended for normal workloads
# -4: max size: 32 Kb <-- not recommended
# -3: max size: 16 Kb <-- probably not recommended
# -2: max size: 8 Kb  <-- good
# -1: max size: 4 Kb  <-- good


Positive numbers mean storing up to exactly that number of elements per list node.

The best-performing option is usually -2 (8 Kb) or -1 (4 Kb), but if your use case is unusual, adjust the settings as needed.
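For reference, in 6.0 a list always reports the quicklist encoding regardless of this setting; the setting only bounds the size of each internal ziplist node rather than switching encodings (illustrative transcript, assuming defaults):

127.0.0.1:6379> RPUSH tasks a b c
(integer) 3
127.0.0.1:6379> OBJECT ENCODING tasks
"quicklist"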

Lists can also be compressed.

The compress depth is the number of quicklist ziplist nodes, counted from each side of the list, that are excluded from compression.

The head and tail of the list are always kept uncompressed so that push/pop operations stay fast. The settings are:

0: disable all list compression
1: don't start compressing until 1 node into the list, from either the head or the tail.
   So: [head]->node->node->...->node->[tail]
   [head] and [tail] are always uncompressed; the inner nodes are compressed.
2: [head]->[next]->node->node->...->node->[prev]->[tail]
   2 means: don't compress head, head->next, tail->prev or tail,
   but compress everything between them.
3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail]
   and so on.

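Like most of the thresholds in this section, the depth can be changed at runtime without a restart (a sketch; the value 1 is just an example):

127.0.0.1:6379> CONFIG SET list-compress-depth 1
OK
127.0.0.1:6379> CONFIG GET list-compress-depth
1) "list-compress-depth"
2) "1"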

Sets have a special encoding in just one case: when a set is composed only of strings that happen to be base-10 integers within the range of 64-bit signed integers.

The following setting limits the size of the set for this special memory-saving encoding to be used:

      set-max-intset-entries 512

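The switch is easy to observe: an all-integer set reports the intset encoding, and adding a single non-integer member converts it (illustrative transcript):

127.0.0.1:6379> SADD ids 100 200 300
(integer) 3
127.0.0.1:6379> OBJECT ENCODING ids
"intset"
127.0.0.1:6379> SADD ids not-a-number
(integer) 1
127.0.0.1:6379> OBJECT ENCODING ids
"hashtable"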

Much like hashes and lists, sorted sets are also specially encoded to save a lot of space.

This encoding is only used while the length and the elements of the sorted set stay below the following limits:

zset-max-ziplist-entries 128
zset-max-ziplist-value 64

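Again OBJECT ENCODING shows the effect: a small sorted set is a ziplist, and crossing either limit (more than 128 entries, or a member longer than 64 bytes) converts it to the skiplist encoding (illustrative transcript):

127.0.0.1:6379> ZADD board 1 alice 2 bob
(integer) 2
127.0.0.1:6379> OBJECT ENCODING board
"ziplist"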

HyperLogLog sparse representation byte limit. The limit includes the 16-byte header. When a HyperLogLog using the sparse representation crosses this limit, it is converted to the dense representation.

A value greater than 16000 is completely useless, because at that point the dense representation is more memory efficient.

The suggested value is about 3000, to get the benefit of the space-efficient encoding without slowing down PFADD too much, which is O(N) with the sparse encoding. The value can be raised to about 10000 when CPU is not a concern but space is, and the data set consists of many HyperLogLogs with cardinality in the 0 - 15000 range.
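A rough way to see the saving is MEMORY USAGE: a sparse HyperLogLog holding a handful of elements costs a few dozen bytes, while a dense one has a fixed cost of roughly 12 KB of registers (the byte count below is illustrative and varies by version and allocator):

127.0.0.1:6379> PFADD visitors u1 u2 u3
(integer) 1
127.0.0.1:6379> PFCOUNT visitors
(integer) 3
127.0.0.1:6379> MEMORY USAGE visitors
(integer) 84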

Stream macro node maximum size / items.

The stream data structure is a radix tree of big nodes that encode multiple items inside. This configuration controls how big a single node may be in bytes, and the maximum number of items it may contain before a new node is started when appending new stream entries. If either of the following settings is set to zero, that limit is ignored; for instance, you can enforce only a max-entries limit by setting max-bytes to 0 and max-entries to the desired value.

stream-node-max-bytes 4096
stream-node-max-entries 100

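In practice this means XADD fills the current macro node until either limit is hit, then starts a new node; setting stream-node-max-bytes to 0, for example, leaves only the entry-count limit in force. A minimal append (the returned ID is illustrative):

127.0.0.1:6379> XADD sensor:1 * temp 21.5
"1596000000000-0"
127.0.0.1:6379> XLEN sensor:1
(integer) 1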

Active rehashing uses 1 millisecond of every 100 milliseconds of CPU time to help rehash the main Redis hash table (the one mapping top-level keys to values). The hash table implementation Redis uses (see dict.c) performs lazy rehashing: the more operations you run against a table that is rehashing, the more rehashing "steps" are performed, so if the server is idle the rehashing never completes and the hash table keeps using extra memory.

The default is to spend this millisecond 10 times every second actively rehashing the main dictionaries, freeing memory when possible.

If unsure:

use "activerehashing no" if you have hard latency requirements and it is not acceptable in your environment that Redis occasionally answers queries with a 2-millisecond delay;

use "activerehashing yes" if you don't have such hard requirements but want to free memory as soon as possible.

Client output buffer limits can be used to force the disconnection of clients that, for some reason, are not reading data from the server fast enough (a common case is a Pub/Sub client that cannot consume messages as fast as the publisher produces them).

The limit can be set differently for three different classes of clients:

# normal  -> normal clients, including MONITOR clients
# replica -> replica clients
# pubsub  -> clients subscribed to at least one pubsub channel or pattern


The syntax of every client-output-buffer-limit directive is the following:

# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>


A client is disconnected immediately once the hard limit is reached, or if the soft limit is reached and stays reached for the specified number of seconds (continuously).

So, for example, if the hard limit is 32 megabytes and the soft limit is 16 megabytes / 10 seconds, the client is disconnected immediately if the output buffer reaches 32 megabytes, but also if it reaches 16 megabytes and continuously stays over that limit for 10 seconds.

By default normal clients are not limited, because they don't receive data unsolicited (in a push fashion) but only in reply to a request, so only asynchronous clients can create a scenario where data is requested faster than it can be read.

Pubsub and replica clients, instead, do have a default limit, since subscribers and replicas receive data in a push fashion.

Either the hard or the soft limit can be disabled by setting it to zero.

client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60

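These limits can also be adjusted at runtime; note that the class and its three values are passed to CONFIG SET as a single quoted argument (a sketch; the pubsub values shown are just examples):

127.0.0.1:6379> CONFIG SET client-output-buffer-limit "pubsub 64mb 16mb 90"
OK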

Client query buffers accumulate new commands. By default they are limited to a fixed amount, to prevent a protocol desynchronization (for instance caused by a bug in the client) from leading to unbounded memory usage in the query buffer. You can configure it here if you have very special needs, such as huge MULTI/EXEC requests or the like.

      # client-query-buffer-limit 1gb


In the Redis protocol, bulk requests, that is, elements representing single strings, are normally limited to 512 MB. You can change that limit here.

      # proto-max-bulk-len 512mb


Redis calls an internal function to perform many background tasks, such as closing connections of timed-out clients, purging expired keys that are never requested, and so forth.

Not all tasks are performed with the same frequency, but Redis checks for tasks to perform according to the specified "hz" value.

By default "hz" is set to 10. Raising the value uses more CPU while Redis is idle, but at the same time makes Redis more responsive when many keys expire at once, and lets timeouts be handled with more precision.

The range is between 1 and 500, but a value over 100 is usually not a good idea. Most users should use the default of 10 and only raise it, up to 100, in environments where very low latency is required.

      hz 10

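The value can be inspected and changed at runtime (a sketch; 100 is the highest value the passage above considers reasonable):

127.0.0.1:6379> CONFIG GET hz
1) "hz"
2) "10"
127.0.0.1:6379> CONFIG SET hz 100
OK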

Normally it is useful to have an HZ value proportional to the number of connected clients, for example to avoid processing too many clients per background task invocation and thereby avoid latency spikes.

Since the default HZ value is conservatively set to 10, Redis offers, and enables by default, an adaptive HZ value that is temporarily raised when there are many connected clients.

When dynamic HZ is enabled, the configured HZ is used as a baseline, but multiples of it are actually used as needed once more clients connect. This way an idle instance uses very little CPU time, while a busy instance is more responsive.

      dynamic-hz yes


When a child process rewrites the AOF file, if the following option is enabled the file is fsync-ed every 32 MB of data generated. This helps commit the file to disk more incrementally and avoids big latency spikes. As the original above notes, the same option exists for RDB saving:

aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes


Redis LFU eviction (see the maxmemory setting) can be tuned. However, it is a good idea to start with the default settings and only change them after investigating how to improve performance and how the keys' LFU values change over time, which can be inspected with the OBJECT FREQ command.

There are two tunable parameters in the Redis LFU implementation: the counter logarithm factor and the counter decay time. It is important to understand what the two parameters mean before changing them.

The LFU counter is just 8 bits per key, so its maximum value is 255, and Redis uses a probabilistic increment with logarithmic behavior. Given the old counter value, when a key is accessed the counter is incremented like this:

1. A random number R between 0 and 1 is extracted.
2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
3. The counter is incremented only if R < P.

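To make the numbers concrete: with the default lfu_log_factor of 10, a key whose counter already sits at 100 is incremented with probability P = 1/(100*10+1) ≈ 0.001, i.e. roughly one access in a thousand, which is why even millions of hits fill the 8-bit counter only very slowly, as the table below shows.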

The default lfu-log-factor is 10. This table shows how the frequency counter changes with different numbers of accesses under different logarithmic factors:

# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+


NOTE: the table above was obtained by running the following commands:

# redis-benchmark -n 1000000 incr foo
# redis-cli object freq foo


NOTE 2: the counter's initial value is 5, to give new objects a chance to accumulate hits.
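OBJECT FREQ only answers while an LFU maxmemory policy is active, and a freshly written key then reports exactly this initial value of 5 (illustrative transcript):

127.0.0.1:6379> CONFIG SET maxmemory-policy allkeys-lfu
OK
127.0.0.1:6379> SET foo bar
OK
127.0.0.1:6379> OBJECT FREQ foo
(integer) 5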

The counter decay time is the time, in minutes, that must elapse for a key's counter to be halved (or decremented, if its value is <= 10).

The default value for lfu-decay-time is 1. The special value 0 means the counter is decayed every time it happens to be scanned.

Active Defragmentation

Original

########################### ACTIVE DEFRAGMENTATION #######################
#
# What is active defragmentation?
# -------------------------------
#
# Active (online) defragmentation allows a Redis server to compact the
# spaces left between small allocations and deallocations of data in memory,
# thus allowing to reclaim back memory.
#
# Fragmentation is a natural process that happens with every allocator (but
# less so with Jemalloc, fortunately) and certain workloads. Normally a server
# restart is needed in order to lower the fragmentation, or at least to flush
# away all the data and create it again. However thanks to this feature
# implemented by Oran Agra for Redis 4.0 this process can happen at runtime
# in an "hot" way, while the server is running.
#
# Basically when the fragmentation is over a certain level (see the
# configuration options below) Redis will start to create new copies of the
# values in contiguous memory regions by exploiting certain specific Jemalloc
# features (in order to understand if an allocation is causing fragmentation
# and to allocate it in a better place), and at the same time, will release the
# old copies of the data. This process, repeated incrementally for all the keys
# will cause the fragmentation to drop back to normal values.
#
# Important things to understand:
#
# 1. This feature is disabled by default, and only works if you compiled Redis
#    to use the copy of Jemalloc we ship with the source code of Redis.
#    This is the default with Linux builds.
#
# 2. You never need to enable this feature if you don't have fragmentation
#    issues.
#
# 3. Once you experience fragmentation, you can enable this feature when
#    needed with the command "CONFIG SET activedefrag yes".
#
# The configuration parameters are able to fine tune the behavior of the
# defragmentation process. If you are not sure about what they mean it is
# a good idea to leave the defaults untouched.

# Enabled active defragmentation
# activedefrag no

# Minimum amount of fragmentation waste to start active defrag
# active-defrag-ignore-bytes 100mb

# Minimum percentage of fragmentation to start active defrag
# active-defrag-threshold-lower 10

# Maximum percentage of fragmentation at which we use maximum effort
# active-defrag-threshold-upper 100

# Minimal effort for defrag in CPU percentage, to be used when the lower
# threshold is reached
# active-defrag-cycle-min 1

# Maximal effort for defrag in CPU percentage, to be used when the upper
# threshold is reached
# active-defrag-cycle-max 25

# Maximum number of set/hash/zset/list fields that will be processed from
# the main dictionary scan
# active-defrag-max-scan-fields 1000

# Jemalloc background thread for purging will be enabled by default
jemalloc-bg-thread yes

# It is possible to pin different threads and processes of Redis to specific
# CPUs in your system, in order to maximize the performances of the server.
# This is useful both in order to pin different Redis threads in different
# CPUs, but also in order to make sure that multiple Redis instances running
# in the same host will be pinned to different CPUs.
#
# Normally you can do this using the "taskset" command, however it is also
# possible to do this via Redis configuration directly, both in Linux and FreeBSD.
#
# You can pin the server/IO threads, bio threads, aof rewrite child process, and
# the bgsave child process. The syntax to specify the cpu list is the same as
# the taskset command:
#
# Set redis server/io threads to cpu affinity 0,2,4,6:
# server_cpulist 0-7:2
#
# Set bio threads to cpu affinity 1,3:
# bio_cpulist 1,3
#
# Set aof rewrite child process to cpu affinity 8,9,10,11:
# aof_rewrite_cpulist 8-11
#
# Set bgsave child process to cpu affinity 1,10,11
# bgsave_cpulist 1,10-11


Translation

What is active defragmentation?

Active (online) defragmentation allows a Redis server to compact the space left between small allocations and deallocations of data in memory, and thus to reclaim memory.

Fragmentation is a natural process that happens with every allocator (though, fortunately, less so with Jemalloc) and with certain workloads. Normally a server restart is needed to lower fragmentation, or at least to flush away all the data and recreate it. Thanks to this feature, implemented by Oran Agra for Redis 4.0, the process can instead happen at runtime, in a "hot" way, while the server is running.

Basically, when fragmentation exceeds a certain level (see the configuration options below), Redis starts creating new copies of the values in contiguous memory regions by exploiting certain Jemalloc-specific features (to understand whether an allocation is causing fragmentation and to place it somewhere better), while at the same time releasing the old copies of the data. This process, repeated incrementally for all keys, brings fragmentation back down to normal values.

Important things to understand:

1. This feature is disabled by default, and it only works if you compiled Redis to use the copy of Jemalloc shipped with the Redis source code. This is the default on Linux builds.
2. You never need to enable this feature if you don't have fragmentation issues.
3. Once you do experience fragmentation, you can enable this feature when needed with the command "CONFIG SET activedefrag yes".


The configuration parameters fine-tune the behavior of the defragmentation process. If you are not sure what they mean, it is a good idea to leave the defaults untouched.

# Enabled active defragmentation
# activedefrag no

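A minimal way to try it at runtime (a sketch: CONFIG SET returns an error if your build does not use the bundled Jemalloc, and the fragmentation figure shown is illustrative):

127.0.0.1:6379> CONFIG SET activedefrag yes
OK
127.0.0.1:6379> INFO memory
# Memory
...
mem_fragmentation_ratio:1.43
...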

It is possible to pin the different threads and processes of Redis to specific CPUs in your system, to maximize the performance of the server.

This is useful both to pin different Redis threads to different CPUs and to make sure that multiple Redis instances running on the same host are pinned to different CPUs.

Normally you can do this with the "taskset" command, but it can also be done directly through the Redis configuration, on both Linux and FreeBSD.

You can pin the server/IO threads, the bio threads, the AOF rewrite child process, and the bgsave child process. The syntax for specifying the CPU list is the same as for the taskset command:

# Set redis server/io threads to cpu affinity 0,2,4,6:
# server_cpulist 0-7:2
#
# Set bio threads to cpu affinity 1,3:
# bio_cpulist 1,3
#
# Set aof rewrite child process to cpu affinity 8,9,10,11:
# aof_rewrite_cpulist 8-11
#
# Set bgsave child process to cpu affinity 1,10,11
# bgsave_cpulist 1,10-11

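For comparison, the taskset command mentioned above achieves the same pinning from the shell when starting the server (a sketch; the config file path is made up):

$ taskset -c 0,2,4,6 redis-server /etc/redis/redis.conf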

That's it for this installment.
