"""
The algorithm works in the following way. You have two repositories: local and
remote. They both contain a DAG of changelists.

The goal of the discovery protocol is to find one set of nodes, *common*,
the set of nodes shared by local and remote.

One of the issues with the original protocol was latency: it could
potentially require lots of roundtrips to discover that the local repo was a
subset of remote (which is a very common case; you usually have few changes
compared to upstream, while upstream probably had lots of development).

The new protocol only requires one interface for the remote repo: `known()`,
which, given a set of changelists, tells you if they are present in the DAG.

The algorithm then works as follows:

 - We will be using three sets, `common`, `missing`, `unknown`. Originally
   all nodes are in `unknown`.
 - Take a sample from `unknown`, call `remote.known(sample)`
   - For each node that remote knows, move it and all its ancestors to
     `common`
   - For each node that remote doesn't know, move it and all its descendants
     to `missing`
 - Iterate until `unknown` is empty

There are a couple of optimizations. First, instead of starting with a random
sample of missing, start by sending all heads; in the case where the local
repo is a subset, you computed the answer in one round trip.

Then you can do something similar to the bisecting strategy used when
finding faulty changesets. Instead of random samples, you can try picking
nodes that will maximize the number of nodes that will be classified with
them (since all ancestors or descendants will be marked as well).
"""

from __future__ import absolute_import

import collections
import random

from .i18n import _
from .node import nullrev
from . import (
    error,
    policy,
    util,
)


def _updatesample(revs, heads, sample, parentfn, quicksamplesize=0):
    """update an existing sample to match the expected size

    The sample is updated with revs exponentially distant from each head of
    the set. (H~1, H~2, H~4, H~8, etc).

    If a target size is specified, the sampling will stop once this size is
    reached. Otherwise sampling will happen until roots of the set are
    reached.

    :revs:  set of revs we want to discover (if None, assume the whole dag)
    :heads: set of DAG head revs
    :sample: a sample to update
    :parentfn: a callable to resolve parents for a revision
    :quicksamplesize: optional target size of the sample"""
    dist = {}
    visit = collections.deque(heads)
    seen = set()
    factor = 1
    while visit:
        curr = visit.popleft()
        if curr in seen:
            continue
        d = dist.setdefault(curr, 1)
        if d > factor:
            factor *= 2
        if d == factor:
            sample.add(curr)
            if quicksamplesize and (len(sample) >= quicksamplesize):
                return
        seen.add(curr)

        for p in parentfn(curr):
            if p != nullrev and (not revs or p in revs):
                dist.setdefault(p, d + 1)
                visit.append(p)


def _limitsample(sample, desiredlen, randomize=True):
    """return a random subset of sample of at most desiredlen items.

    If randomize is False, though, a deterministic subset is returned.
    This is meant for integration tests.
    """
    if len(sample) <= desiredlen:
        return sample
    if randomize:
        return set(random.sample(sample, desiredlen))
    sample = list(sample)
    sample.sort()
    return set(sample[:desiredlen])


class partialdiscovery(object):
    """an object representing ongoing discovery

    Fed with data from the remote repository, this object keeps track of the
    current set of changesets in various states:

    - common:    revs also known remotely
    - undecided: revs we don't have information on yet
    - missing:   revs missing remotely
    (all tracked revisions are known locally)
    """

    def __init__(self, repo, targetheads, respectsize, randomize=True):
        self._repo = repo
        self._targetheads = targetheads
        self._common = repo.changelog.incrementalmissingrevs()
        self._undecided = None
        self.missing = set()
        self._childrenmap = None
        self._respectsize = respectsize
        self.randomize = randomize

    def addcommons(self, commons):
        """register nodes known as common"""
        self._common.addbases(commons)
        if self._undecided is not None:
            self._common.removeancestorsfrom(self._undecided)

    def addmissings(self, missings):
        """register some nodes as missing"""
        newmissing = self._repo.revs(b'%ld::%ld', missings, self.undecided)
        if newmissing:
            self.missing.update(newmissing)
            self.undecided.difference_update(newmissing)

    def addinfo(self, sample):
        """consume an iterable of (rev, known) tuples"""
        common = set()
        missing = set()
        for rev, known in sample:
            if known:
                common.add(rev)
            else:
                missing.add(rev)
        if common:
            self.addcommons(common)
        if missing:
            self.addmissings(missing)

    def hasinfo(self):
        """return True if we have any clue about the remote state"""
        return self._common.hasbases()

    def iscomplete(self):
        """True if all the necessary data have been gathered"""
        return self._undecided is not None and not self._undecided

    @property
    def undecided(self):
        if self._undecided is not None:
            return self._undecided
        self._undecided = set(self._common.missingancestors(self._targetheads))
        return self._undecided

    def stats(self):
        return {
            'undecided': len(self.undecided),
        }

    def commonheads(self):
        """the heads of the known common set"""
        # heads(common) == heads(common.bases) since common represents
        # common.bases and all its ancestors
        return self._common.basesheads()

    def _parentsgetter(self):
        getrev = self._repo.changelog.index.__getitem__

        def getparents(r):
            return getrev(r)[5:7]

        return getparents

    def _childrengetter(self):
        if self._childrenmap is not None:
            # During discovery, the `undecided` set keeps shrinking.
            # Therefore, the map computed for an iteration can be reused for
            # following iterations.
            return self._childrenmap.__getitem__

        # _childrenmap maps rev -> [childrev, ...] for the current undecided
        # set only.
        children = {}
        parentrevs = self._parentsgetter()
        for rev in sorted(self.undecided):
            # Always ensure revision has an entry so we don't need to worry
            # about missing keys
            children[rev] = []
            for prev in parentrevs(rev):
                if prev == nullrev:
                    continue
                c = children.get(prev)
                if c is not None:
                    c.append(rev)
        self._childrenmap = children
        return children.__getitem__

    def takequicksample(self, headrevs, size):
        """takes a quick sample of size <size>

        It is meant for initial sampling and focuses on querying heads and
        close ancestors of heads.

        :headrevs: set of head revisions in local DAG to consider
        :size: the maximum size of the sample"""
        revs = self.undecided
        if len(revs) <= size:
            return list(revs)
        sample = set(self._repo.revs(b'heads(%ld)', revs))

        if len(sample) >= size:
            return _limitsample(sample, size, randomize=self.randomize)

        _updatesample(
            None, headrevs, sample, self._parentsgetter(), quicksamplesize=size
        )
        return sample

    def takefullsample(self, headrevs, size):
        revs = self.undecided
        if len(revs) <= size:
            return list(revs)
        repo = self._repo
        sample = set(repo.revs(b'heads(%ld)', revs))
        parentrevs = self._parentsgetter()

        # update from heads
        revsheads = sample.copy()
        _updatesample(revs, revsheads, sample, parentrevs)

        # update from roots
        revsroots = set(repo.revs(b'roots(%ld)', revs))
        childrenrevs = self._childrengetter()
        _updatesample(revs, revsroots, sample, childrenrevs)
        assert sample

        if not self._respectsize:
            size = max(size, min(len(revsroots), len(revsheads)))

        sample = _limitsample(sample, size, randomize=self.randomize)
        if len(sample) < size:
            more = size - len(sample)
            takefrom = list(revs - sample)
            if self.randomize:
                sample.update(random.sample(takefrom, more))
            else:
                takefrom.sort()
                sample.update(takefrom[:more])
        return sample


pure_partialdiscovery = partialdiscovery

partialdiscovery = policy.importrust(
    'discovery', member='PartialDiscovery', default=partialdiscovery
)


def findcommonheads(
    ui,
    local,
    remote,
    abortwhenunrelated=True,
    ancestorsof=None,
    audit=None,
):
    """Return a tuple (common, anyincoming, remoteheads) used to identify
    missing nodes from or in remote.

    The audit argument is an optional dictionary that a caller can pass. It
    will be updated with extra data about the discovery; this is useful for
    debugging.
    """

    samplegrowth = float(ui.config(b'devel', b'discovery.grow-sample.rate'))

    start = util.timer()

    roundtrips = 0
    cl = local.changelog
    clnode = cl.node
    clrev = cl.rev

    if ancestorsof is not None:
        ownheads = [clrev(n) for n in ancestorsof]
    else:
        ownheads = [rev for rev in cl.headrevs() if rev != nullrev]

    initial_head_exchange = ui.configbool(b'devel', b'discovery.exchange-heads')
    initialsamplesize = ui.configint(b'devel', b'discovery.sample-size.initial')
    fullsamplesize = ui.configint(b'devel', b'discovery.sample-size')

    if initial_head_exchange:
        # We also ask remote about all the local heads. That set can be
        # arbitrarily large, so we limit it to the initial sample size when
        # the server has a limit on argument size.
        if remote.limitedarguments:
            sample = _limitsample(ownheads, initialsamplesize)
            # indices between sample and externalized version must match
            sample = list(sample)
        else:
            sample = ownheads

        ui.debug(b"query 1; heads\n")
        roundtrips += 1
        with remote.commandexecutor() as e:
            fheads = e.callcommand(b'heads', {})
            fknown = e.callcommand(
                b'known',
                {
                    b'nodes': [clnode(r) for r in sample],
                },
            )

        srvheadhashes, yesno = fheads.result(), fknown.result()

        if audit is not None:
            audit[b'total-roundtrips'] = 1

        if cl.tiprev() == nullrev:
            if srvheadhashes != [cl.nullid]:
                return [cl.nullid], True, srvheadhashes
            return [cl.nullid], False, []
    else:
        # we still need the remote head for the function return
        with remote.commandexecutor() as e:
            fheads = e.callcommand(b'heads', {})
        srvheadhashes = fheads.result()

    # start actual discovery (we note this before the next "if" for
    # compatibility reasons)
    ui.status(_(b"searching for changes\n"))

    knownsrvheads = []  # revnos of remote heads that are known locally
    for node in srvheadhashes:
        if node == cl.nullid:
            continue
        try:
            knownsrvheads.append(clrev(node))
        # Catches unknown and filtered nodes.
        except error.LookupError:
            continue

    if initial_head_exchange:
        # early exit if we know all the specified remote heads already
        if len(knownsrvheads) == len(srvheadhashes):
            ui.debug(b"all remote heads known locally\n")
            return srvheadhashes, False, srvheadhashes

        if len(sample) == len(ownheads) and all(yesno):
            ui.note(_(b"all local changesets known remotely\n"))
            ownheadhashes = [clnode(r) for r in ownheads]
            return ownheadhashes, True, srvheadhashes

    # full blown discovery

    # if the server has a limit to its argument size, we can't grow the sample
    configbool = local.ui.configbool
    grow_sample = configbool(b'devel', b'discovery.grow-sample')
    grow_sample = grow_sample and not remote.limitedarguments

    dynamic_sample = configbool(b'devel', b'discovery.grow-sample.dynamic')
    hard_limit_sample = not (dynamic_sample or remote.limitedarguments)

    randomize = ui.configbool(b'devel', b'discovery.randomize')
    if cl.index.rust_ext_compat:
        pd = partialdiscovery
    else:
        pd = pure_partialdiscovery
    disco = pd(local, ownheads, hard_limit_sample, randomize=randomize)
    if initial_head_exchange:
        # treat remote heads (and maybe own heads) as a first implicit sample
        # response
        disco.addcommons(knownsrvheads)
        disco.addinfo(zip(sample, yesno))

    full = not initial_head_exchange
    progress = ui.makeprogress(_(b'searching'), unit=_(b'queries'))
    while not disco.iscomplete():

        if full or disco.hasinfo():
            if full:
                ui.note(_(b"sampling from both directions\n"))
            else:
                ui.debug(b"taking initial sample\n")
            samplefunc = disco.takefullsample
            targetsize = fullsamplesize
            if grow_sample:
                fullsamplesize = int(fullsamplesize * samplegrowth)
        else:
            # use even cheaper initial sample
            ui.debug(b"taking quick initial sample\n")
            samplefunc = disco.takequicksample
            targetsize = initialsamplesize
        sample = samplefunc(ownheads, targetsize)

        roundtrips += 1
        progress.update(roundtrips)
        stats = disco.stats()
        ui.debug(
            b"query %i; still undecided: %i, sample size is: %i\n"
            % (roundtrips, stats['undecided'], len(sample))
        )

        # indices between sample and externalized version must match
        sample = list(sample)

        with remote.commandexecutor() as e:
            yesno = e.callcommand(
                b'known',
                {
                    b'nodes': [clnode(r) for r in sample],
                },
            ).result()

        full = True

        disco.addinfo(zip(sample, yesno))

    result = disco.commonheads()
    elapsed = util.timer() - start
    progress.complete()
    ui.debug(b"%d total queries in %.4fs\n" % (roundtrips, elapsed))
    msg = (
        b'found %d common and %d unknown server heads,'
        b' %d roundtrips in %.4fs\n'
    )
    missing = set(result) - set(knownsrvheads)
    ui.log(b'discovery', msg, len(result), len(missing), roundtrips, elapsed)

    if audit is not None:
        audit[b'total-roundtrips'] = roundtrips

    if not result and srvheadhashes != [cl.nullid]:
        if abortwhenunrelated:
            raise error.Abort(_(b"repository is unrelated"))
        else:
            ui.warn(_(b"warning: repository is unrelated\n"))
        return (
            {cl.nullid},
            True,
            srvheadhashes,
        )

    anyincoming = srvheadhashes != [cl.nullid]
    result = {clnode(r) for r in result}
    return result, anyincoming, srvheadhashes
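The common/missing/unknown loop described in the module docstring can be sketched as a standalone toy model. Everything here is illustrative, not Mercurial API: the DAG is a plain dict mapping rev to its parents, `remote_known` stands in for the peer's `known()` command, and the sampling is a deterministic stand-in for the real heuristics.

```python
def ancestors(dag, revs):
    """return revs plus all their ancestors (dag: rev -> list of parent revs)"""
    seen = set()
    stack = list(revs)
    while stack:
        r = stack.pop()
        if r in seen:
            continue
        seen.add(r)
        stack.extend(dag[r])
    return seen


def descendants(dag, revs):
    """return revs plus all their descendants"""
    children = {r: [] for r in dag}
    for r, ps in dag.items():
        for p in ps:
            children[p].append(r)
    seen = set()
    stack = list(revs)
    while stack:
        r = stack.pop()
        if r in seen:
            continue
        seen.add(r)
        stack.extend(children[r])
    return seen


def discover_common(dag, remote_known, sample_size=2):
    """classify every local rev as common or missing using only remote_known()"""
    unknown = set(dag)
    common, missing = set(), set()
    roundtrips = 0
    while unknown:
        # deterministic stand-in for takequicksample/takefullsample
        sample = sorted(unknown)[:sample_size]
        roundtrips += 1
        for rev, known in zip(sample, remote_known(sample)):
            if known:
                # remote has it: it and all its ancestors are common
                newcommon = ancestors(dag, [rev]) & unknown
                common |= newcommon
                unknown -= newcommon
            else:
                # remote lacks it: it and all its descendants are missing
                newmissing = descendants(dag, [rev]) & unknown
                missing |= newmissing
                unknown -= newmissing
    return common, missing, roundtrips


# A small history: 0 <- 1 <- 2 <- 3, with a second head 2 <- 4.
dag = {0: [], 1: [0], 2: [1], 3: [2], 4: [2]}
# Pretend the remote only has revs 0..2.
remote = {0, 1, 2}
common, missing, trips = discover_common(dag, lambda s: [r in remote for r in s])
```

Each round trip classifies every sampled rev plus its whole ancestor or descendant cone, which is why the loop terminates in far fewer queries than there are undecided revisions.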
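The exponential-distance sampling used when updating a sample (H~1, H~2, H~4, H~8, …) can also be illustrated on its own. This is a simplified sketch of the idea, assuming a `parents` callable that returns an empty list at roots instead of Mercurial's `nullrev` sentinel; names are hypothetical.

```python
import collections


def exp_sample(parents, heads, quicksamplesize=0):
    """collect revs at exponentially growing distances (1, 2, 4, 8, ...) from heads

    BFS from the heads; a rev is sampled when its distance from a head equals
    the current power-of-two threshold, which doubles as soon as the frontier
    passes it. Stops early once quicksamplesize revs are collected (if set).
    """
    dist = {}
    visit = collections.deque(heads)
    seen = set()
    factor = 1
    sample = set()
    while visit:
        curr = visit.popleft()
        if curr in seen:
            continue
        d = dist.setdefault(curr, 1)
        if d > factor:
            factor *= 2
        if d == factor:
            sample.add(curr)
            if quicksamplesize and len(sample) >= quicksamplesize:
                return sample
        seen.add(curr)
        for p in parents(curr):
            dist.setdefault(p, d + 1)
            visit.append(p)
    return sample


# On a linear chain 0 <- 1 <- ... <- 10, sampling from head 10 picks revs at
# distances 1, 2, 4 and 8 from the head: H, H~1, H~3 and H~7.
linear_parents = lambda r: [r - 1] if r > 0 else []
picked = exp_sample(linear_parents, [10])
```

Spacing the queried revs exponentially means a single round trip can bisect a long undecided chain at several scales at once, instead of probing one revision per level.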