The ZKFC Mechanism of the Hadoop NameNode
轉(zhuǎn)載原貼?https://www.cnblogs.com/cssdongl/p/6046397.html
Broadly speaking, NameNode HA can be divided into two parts: the shared editLog mechanism and ZKFC's control over NameNode state. This post covers:
一般導(dǎo)致NameNode切換的原因
ZKFC的作用是什么?如何判斷一個(gè)NN是否健康
NameNode HA是如何實(shí)現(xiàn)的?
NameNode因?yàn)閿嚯妼?dǎo)致不能切換的原理,怎樣進(jìn)行恢復(fù)
隨著集群規(guī)模的變大和任務(wù)量變多,NameNode的壓力會(huì)越來越大,一些默認(rèn)參數(shù)已經(jīng)不能滿足集群的日常需求,除此之外,異常的Job在短時(shí)間內(nèi)創(chuàng)建和刪除大量文件,引起NN節(jié)點(diǎn)頻繁更新內(nèi)存的數(shù)據(jù)結(jié)構(gòu)從而導(dǎo)致RPC的處理時(shí)間變長,CallQueue里面的RpcCall堆積,甚至嚴(yán)重的情況下打滿CallQueue,導(dǎo)致NameNode響應(yīng)變慢,甚至無響應(yīng),ZKFC的HealthMonitor監(jiān)控自己的NN異常時(shí),則會(huì)斷開與ZooKeeper的鏈接,從而釋放鎖,另外一個(gè)NN上的ZKFC進(jìn)行搶鎖進(jìn)行Standby到Active狀態(tài)的切換。這是一般引起的切換的流程。
當(dāng)然,如果你是手動(dòng)去切換這也是可以的,當(dāng)Active主機(jī)出現(xiàn)異常時(shí),有時(shí)候則需要在必要的時(shí)間內(nèi)進(jìn)行切換。
Under normal conditions, ZKFC's HealthMonitor mainly checks whether the disks (i.e. disk space) on the NameNode host are still usable. The NameNode maintains the cluster's metadata, so when its disks become unusable, the NN should be failed over.
```java
/**
 * Return true if disk space is available on at least one of the configured
 * redundant volumes, and all of the configured required volumes.
 *
 * @return True if the configured amount of disk space is available on at
 *         least one redundant volume and all of the required volumes, false
 *         otherwise.
 */
public boolean hasAvailableDiskSpace() {
  return NameNodeResourcePolicy.areResourcesAvailable(volumes.values(),
      minimumRedundantVolumes);
}
```
Besides the healthy state (SERVICE_HEALTHY), there are the SERVICE_UNHEALTHY (disk space unavailable) and SERVICE_NOT_RESPONDING (various other conditions) states; in either of those two states, ZKFC considers the NN unhealthy.
So far we have seen how ZKFC decides whether an NN is healthy. When the NN becomes unhealthy, how does the NameNode failover actually happen?
The ZKFailoverController class implements two important callback classes, one called ElectorCallbacks and the other HealthCallbacks; as the names suggest, these are the callbacks used for election and for health checks. It also holds two key components, elector (ActiveStandbyElector) and healthMonitor (HealthMonitor), which together form the overall structure.
ElectorCallbacks:
```java
/**
 * Callbacks from elector
 */
class ElectorCallbacks implements ActiveStandbyElectorCallback {
  @Override
  public void becomeActive() throws ServiceFailedException {
    ZKFailoverController.this.becomeActive();
  }

  @Override
  public void becomeStandby() {
    ZKFailoverController.this.becomeStandby();
  }
  ...
}
```
HealthCallbacks:
```java
/**
 * Callbacks from HealthMonitor
 */
class HealthCallbacks implements HealthMonitor.Callback {
  @Override
  public void enteredState(HealthMonitor.State newState) {
    setLastHealthState(newState);
    recheckElectability();
  }
}
```
對(duì)于HealthMonitor來說,在ZKFC進(jìn)程啟動(dòng)的時(shí)候,就已經(jīng)將HealthCallbacks注冊進(jìn)去了,HealthMonitor都會(huì)定期的檢查NameNode是否健康,我們可以通過監(jiān)控ha.health-monitor.check-interval.ms?去設(shè)置監(jiān)控的間隔時(shí)間和通過參數(shù)?ha.health-monitor.rpc-timeout.ms?設(shè)置timeout時(shí)間, 當(dāng)集群變大的時(shí)候,需要適當(dāng)?shù)脑O(shè)置改值,讓ZKFC的HealthMonitor沒那么“敏感” 。
ZKFC monitors the NN process through RPC calls; when something goes wrong, it enters different handling paths. Here is the simplified code:
```java
private void doHealthChecks() throws InterruptedException {
  while (shouldRun) {
    try {
      status = proxy.getServiceStatus();
      proxy.monitorHealth();
      healthy = true;
    } catch (HealthCheckFailedException e) {
      ...
      enterState(State.SERVICE_UNHEALTHY);
    } catch (Throwable t) {
      ...
      enterState(State.SERVICE_NOT_RESPONDING);
      Thread.sleep(sleepAfterDisconnectMillis);
      return;
    }
    ...
  }
}
```
回調(diào)函數(shù)就是這么起作用啦,那么回調(diào)函數(shù)做了什么呢?總的來說,如果NN健康(SERVICE_HEALTHY?)就加入選舉,如果不健康就退出選舉(SERVICE_UNHEALTHY?,?SERVICE_NOT_RESPONDING?)
```java
case SERVICE_UNHEALTHY:
case SERVICE_NOT_RESPONDING:
  LOG.info("Quitting master election for " + localTarget +
      " and marking that fencing is necessary");
  elector.quitElection(true);
  break;
```
Quitting the election brings in the elector (ActiveStandbyElector). The true argument means that if something goes wrong while the NN transitions from Active to Standby, the NN must be fenced; this is one of the reasons an NN process can end up being killed.
How does it quit the election? By closing the zkClient connection, so that the election lock held in ZooKeeper disappears:
```java
void terminateConnection() {
  if (zkClient == null) {
    return;
  }
  LOG.debug("Terminating ZK connection for " + this);
  ZooKeeper tempZk = zkClient;
  ...
  try {
    tempZk.close();
  } catch (InterruptedException e) {
    LOG.warn(e);
  }
  ...
}
```
對(duì)于ActiveStandbyElector來說,他有個(gè)WatcherWithClientRef類專門用來監(jiān)聽ZooKeeper上的的znode的事件變化,當(dāng)事件變化時(shí),就會(huì)調(diào)用ActiveStandbyElector的processWatchEvent的方法
```java
watcher = new WatcherWithClientRef();
ZooKeeper zk = new ZooKeeper(zkHostPort, zkSessionTimeout, watcher);
```
and
```java
/**
 * Watcher implementation which keeps a reference around to the
 * original ZK connection, and passes it back along with any
 * events.
 */
private final class WatcherWithClientRef implements Watcher {
  ...
  @Override
  public void process(WatchedEvent event) {
    hasReceivedEvent.countDown();
    try {
      hasSetZooKeeper.await(zkSessionTimeout, TimeUnit.MILLISECONDS);
      ActiveStandbyElector.this.processWatchEvent(zk, event);
    } catch (Throwable t) {
      fatalError(
          "Failed to process watcher event " + event + ": " +
          StringUtils.stringifyException(t));
    }
  }
  ...
}
```
ActiveStandbyElector's processWatchEvent method contains the logic for handling the different event types: rejoin the election, or keep watching the znode. When the other ZKFC observes an event, it tries to grab the lock; grabbing the lock is essentially creating the znode, and the node is created with CreateMode.EPHEMERAL. So when the HealthMonitor detects an unhealthy NN, the connection is dropped, the node vanishes, the watcher sees a NodeDeleted event, and the other side goes on to create the node:
```java
switch (eventType) {
  case NodeDeleted:
    if (state == State.ACTIVE) {
      enterNeutralMode();
    }
    joinElectionInternal();
    break;
  case NodeDataChanged:
    monitorActiveStatus();
    break;
```
又因?yàn)锳ctiveStandbyElector實(shí)現(xiàn)了StatCallback接口,當(dāng)節(jié)點(diǎn)創(chuàng)建成功時(shí),就會(huì)回調(diào)processResult方法看是否創(chuàng)建成功, 如果創(chuàng)建成功則去檢查zkBreadCrumbPath是否存在之前的Active節(jié)點(diǎn),如果存在,則調(diào)用RPC讓其變?yōu)镾tandby,看能否轉(zhuǎn)變成功,否則則SSH過去fence掉NN進(jìn)程。 ,保持Active節(jié)點(diǎn)只有一個(gè),并且恢復(fù)正常服務(wù)
When the Active NN loses power, suffers a network failure, is overloaded, or the machine otherwise becomes unreachable, the Standby NN may fail to become Active, leaving the HA cluster unable to serve requests. The reason: even with the Active NN powered off and out of service, the znodes still hold that NN's information in ActiveBreadCrumb and ActiveStandbyElectorLock. When ActiveStandbyElectorLock is released because the Active NN's session dropped abnormally, the Standby NN grabs the lock and checks ActiveBreadCrumb for a previous Active NN. If one is recorded, it tries to transition that NN to Standby before making itself Active; because that call fails, it falls back to fencing the old Active NN over SSH. Since the machine is unreachable, it cannot confirm that the old Active NN became Standby, so it refuses to promote itself and stays in Standby, avoiding a split-brain situation.
The fix: once you have confirmed that the old Active machine is really shut down, simply re-run hdfs zkfc -formatZK.
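A hedged sketch of that recovery sequence, assuming the old Active host is confirmed powered off (the exact daemon-start command varies by Hadoop version; `hadoop-daemon.sh` is the Hadoop 2 form):

```shell
# Only after confirming the old Active machine is really down:
# re-initialize ZKFC's state in ZooKeeper (recreates /hadoop-ha/<nameservice>)
hdfs zkfc -formatZK

# Restart ZKFC on the surviving NameNode host so it re-runs the election
hadoop-daemon.sh start zkfc
```

Running -formatZK while the old Active might still be alive risks split-brain, which is exactly what the breadcrumb check exists to prevent, so the "confirmed down" precondition matters.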
When the NN is under GC pressure or heavy load, you can tune the GC algorithm and increase the NameNode's handler thread count to speed up request processing. You can also split the node's ports, dfs.namenode.rpc-address.ns1.nn2 and dfs.namenode.servicerpc-address.ns1.nn2, to separate client requests from DataNode and other service-type requests and spread the load, and appropriately raise ZKFC's monitoring timeouts.
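As an illustration of the port split, the separation is configured in hdfs-site.xml. The nameservice ns1 and NameNode ID nn2 come from the text; the host name and ports below are hypothetical:

```xml
<!-- hdfs-site.xml: client-facing RPC port for nn2 in nameservice ns1 -->
<property>
  <name>dfs.namenode.rpc-address.ns1.nn2</name>
  <value>nn2-host.example.com:8020</value>
</property>

<!-- Separate port for DataNode heartbeats/block reports and other internal
     service traffic, so heavy client load cannot starve them -->
<property>
  <name>dfs.namenode.servicerpc-address.ns1.nn2</name>
  <value>nn2-host.example.com:8021</value>
</property>
```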
EI企業(yè)智能
版權(quán)聲明:本文內(nèi)容由網(wǎng)絡(luò)用戶投稿,版權(quán)歸原作者所有,本站不擁有其著作權(quán),亦不承擔(dān)相應(yīng)法律責(zé)任。如果您發(fā)現(xiàn)本站中有涉嫌抄襲或描述失實(shí)的內(nèi)容,請(qǐng)聯(lián)系我們jiasou666@gmail.com 處理,核實(shí)后本網(wǎng)站將在24小時(shí)內(nèi)刪除侵權(quán)內(nèi)容。