
Strategy: Google Uses Canary Requests in Data Mining to Probe Whether a Query Is Destructive

December 10, 2013

Google queries in-memory data on thousands of nodes and merges the results. One of the most serious problems with this approach is the Query of Death.

A query can cause a program to fail, whether because of a bug or some other factor. This means that a single query can bring down an entire cluster, which is bad for both availability and response time, because it takes quite a while to restore the runtime environment on thousands of machines. Such a query is therefore called a Query of Death.

New queries keep entering the system and new software keeps being deployed onto it, so the Query of Death problem can never be eliminated entirely.

Two workable approaches:

 * Test against logs: Google replayed a month's worth of logs to see which queries in them would cause the system to crash. This helps, but Queries of Death still get through.
 * Send a canary request: the request is first sent to a single machine. If it executes successfully, it can be assumed that it will succeed on all machines, so it is released to the rest of the cluster. If it fails, only that one machine goes down, which is no great loss. Another machine is then chosen and the request is retried to confirm whether it really is a Query of Death. If a request fails a certain number of times, it is rejected and logged for later debugging.

The result is that only a few machines crash instead of thousands failing at once. It is a clever technique that takes advantage of the existing trends of scale-out and continuous deployment, and the strategy should be worth borrowing in other systems as well.
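Below is a minimal sketch of what such a canary gate could look like. The class name, the retry threshold, and the send_to(node, query) helper are assumptions made for illustration; this is not Google's actual implementation, only the shape of the idea described above.

```python
import logging
import random

MAX_CANARY_ATTEMPTS = 3  # assumed threshold before a query is rejected


class CanaryGate:
    def __init__(self, nodes, send_to):
        self.nodes = nodes        # addresses of the leaf nodes
        self.send_to = send_to    # callable(node, query) -> bool, True if the node survived

    def admit(self, query):
        """Try the query on single machines before fanning it out to the cluster."""
        for attempt in range(MAX_CANARY_ATTEMPTS):
            node = random.choice(self.nodes)
            if self.send_to(node, query):
                # The canary survived, so the query is very likely safe
                # to broadcast to the remaining thousands of nodes.
                return True
            logging.warning("canary attempt %d failed on %s", attempt + 1, node)
        # The query crashed several independent machines: treat it as a
        # probable Query of Death, reject it, and keep it for offline debugging.
        logging.error("rejecting probable query of death: %r", query)
        return False
```

Retrying on a different, randomly chosen node is what separates a genuine Query of Death from a single flaky machine: a bad node fails many queries, while a Query of Death fails on every node it touches.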

Strategy: Google Sends Canary Requests into the Data Mine

Google runs queries against thousands of in-memory index nodes in
parallel and then merges the results. One of the interesting problems
with this approach, explains Google's Jeff Dean in this lecture at
Stanford, is the Query of Death.

A query can cause a program to fail because of bugs or various other
issues. This means that a single query can take down an entire cluster
of machines, which is not good for availability and response times, as
it takes quite a while for thousands of machines to recover. Thus the
Query of Death. New queries are always coming into the system and when
you are always rolling out new software, it's impossible to completely
get rid of the problem.

Two solutions:

  • Test against logs. Google replays a month's worth of logs to see if any of those queries kill anything. That helps, but Queries of Death may still happen.
  • Send a canary request. A request is sent to one machine. If the request succeeds then it will probably succeed on all machines, so go ahead with the query. If the request fails then only one machine is down, no big deal. Now try the request again on another machine to verify that it really is a query of death. If the request fails a certain number of times then the request is rejected and logged for further debugging.

The result is that only a few servers crash instead of thousands. This
is a pretty clever technique, especially given the combined trends of
scale-out and continuous deployment. It could also be a useful strategy
for others.
