"""Implements a Jinja / Python combination lexer. The ``Lexer`` class
is used to do some preprocessing. It filters out invalid operators like
the bitshift operators we don't allow in templates. It separates
template code and python code in expressions.
"""
import re
import typing as t
from ast import literal_eval
from collections import deque
from sys import intern

from ._identifier import pattern as name_re
from .exceptions import TemplateSyntaxError
from .utils import LRUCache

if t.TYPE_CHECKING:
    import typing_extensions as te

    from .environment import Environment

# cache for the lexers. Exists in order to be able to have multiple
# environments with the same lexer
_lexer_cache: t.MutableMapping[t.Tuple, "Lexer"] = LRUCache(50)  # type: ignore

# static regular expressions
whitespace_re = re.compile(r"\s+")
newline_re = re.compile(r"(\r\n|\r|\n)")
string_re = re.compile(
    r"('([^'\\]*(?:\\.[^'\\]*)*)'" r'|"([^"\\]*(?:\\.[^"\\]*)*)")', re.S
)
integer_re = re.compile(
    r"""
    (
        0b(_?[0-1])+ # binary
    |
        0o(_?[0-7])+ # octal
    |
        0x(_?[\da-f])+ # hex
    |
        [1-9](_?\d)* # decimal
    |
        0(_?0)* # decimal zero
    )
    """,
    re.IGNORECASE | re.VERBOSE,
)
float_re = re.compile(
    r"""
    (?<!\.)  # doesn't start with a .
    (\d+_)*\d+  # digits, possibly _ separated
    (
        (\.(\d+_)*\d+)?  # decimal part
        e[+\-]?(\d+_)*\d+  # exponent part
    |
        \.(\d+_)*\d+  # decimal part
    )
    """,
    re.IGNORECASE | re.VERBOSE,
)

# internal the tokens and keep references to them
TOKEN_ADD = intern("add")
TOKEN_ASSIGN = intern("assign")
TOKEN_COLON = intern("colon")
TOKEN_COMMA = intern("comma")
TOKEN_DIV = intern("div")
TOKEN_DOT = intern("dot")
TOKEN_EQ = intern("eq")
TOKEN_FLOORDIV = intern("floordiv")
TOKEN_GT = intern("gt")
TOKEN_GTEQ = intern("gteq")
TOKEN_LBRACE = intern("lbrace")
TOKEN_LBRACKET = intern("lbracket")
TOKEN_LPAREN = intern("lparen")
TOKEN_LT = intern("lt")
TOKEN_LTEQ = intern("lteq")
TOKEN_MOD = intern("mod")
TOKEN_MUL = intern("mul")
TOKEN_NE = intern("ne")
TOKEN_PIPE = intern("pipe")
TOKEN_POW = intern("pow")
TOKEN_RBRACE = intern("rbrace")
TOKEN_RBRACKET = intern("rbracket")
TOKEN_RPAREN = intern("rparen")
TOKEN_SEMICOLON = intern("semicolon")
TOKEN_SUB = intern("sub")
TOKEN_TILDE = intern("tilde")
TOKEN_WHITESPACE = intern("whitespace")
TOKEN_FLOAT = intern("float")
TOKEN_INTEGER = intern("integer")
TOKEN_NAME = intern("name")
TOKEN_STRING = intern("string")
TOKEN_OPERATOR = intern("operator")
TOKEN_BLOCK_BEGIN = intern("block_begin")
TOKEN_BLOCK_END = intern("block_end")
TOKEN_VARIABLE_BEGIN = intern("variable_begin")
TOKEN_VARIABLE_END = intern("variable_end")
TOKEN_RAW_BEGIN = intern("raw_begin")
TOKEN_RAW_END = intern("raw_end")
TOKEN_COMMENT_BEGIN = intern("comment_begin")
TOKEN_COMMENT_END = intern("comment_end")
TOKEN_COMMENT = intern("comment")
TOKEN_LINESTATEMENT_BEGIN = intern("linestatement_begin")
TOKEN_LINESTATEMENT_END = intern("linestatement_end")
TOKEN_LINECOMMENT_BEGIN = intern("linecomment_begin")
TOKEN_LINECOMMENT_END = intern("linecomment_end")
TOKEN_LINECOMMENT = intern("linecomment")
TOKEN_DATA = intern("data")
TOKEN_INITIAL = intern("initial")
TOKEN_EOF = intern("eof")

# bind operators to token types
operators = {
    "+": TOKEN_ADD,
    "-": TOKEN_SUB,
    "/": TOKEN_DIV,
    "//": TOKEN_FLOORDIV,
    "*": TOKEN_MUL,
    "%": TOKEN_MOD,
    "**": TOKEN_POW,
    "~": TOKEN_TILDE,
    "[": TOKEN_LBRACKET,
    "]": TOKEN_RBRACKET,
    "(": TOKEN_LPAREN,
    ")": TOKEN_RPAREN,
    "{": TOKEN_LBRACE,
    "}": TOKEN_RBRACE,
    "==": TOKEN_EQ,
    "!=": TOKEN_NE,
    ">": TOKEN_GT,
    ">=": TOKEN_GTEQ,
    "<": TOKEN_LT,
    "<=": TOKEN_LTEQ,
    "=": TOKEN_ASSIGN,
    ".": TOKEN_DOT,
    ":": TOKEN_COLON,
    "|": TOKEN_PIPE,
    ",": TOKEN_COMMA,
    ";": TOKEN_SEMICOLON,
}

reverse_operators = {v: k for k, v in operators.items()}
assert len(operators) == len(reverse_operators), "operators dropped"
operator_re = re.compile(
    f"({'|'.join(re.escape(x) for x in sorted(operators, key=lambda x: -len(x)))})"
)

ignored_tokens = frozenset(
    [
        TOKEN_COMMENT_BEGIN,
        TOKEN_COMMENT,
        TOKEN_COMMENT_END,
        TOKEN_WHITESPACE,
        TOKEN_LINECOMMENT_BEGIN,
        TOKEN_LINECOMMENT_END,
        TOKEN_LINECOMMENT,
    ]
)
ignore_if_empty = frozenset(
    [TOKEN_WHITESPACE, TOKEN_DATA, TOKEN_COMMENT, TOKEN_LINECOMMENT]
)


def _describe_token_type(token_type: str) -> str:
    if token_type in reverse_operators:
        return reverse_operators[token_type]

    return {
        TOKEN_COMMENT_BEGIN: "begin of comment",
        TOKEN_COMMENT_END: "end of comment",
        TOKEN_COMMENT: "comment",
        TOKEN_LINECOMMENT: "comment",
        TOKEN_BLOCK_BEGIN: "begin of statement block",
        TOKEN_BLOCK_END: "end of statement block",
        TOKEN_VARIABLE_BEGIN: "begin of print statement",
        TOKEN_VARIABLE_END: "end of print statement",
        TOKEN_LINESTATEMENT_BEGIN: "begin of line statement",
        TOKEN_LINESTATEMENT_END: "end of line statement",
        TOKEN_DATA: "template data / text",
        TOKEN_EOF: "end of template",
    }.get(token_type, token_type)


def describe_token(token: "Token") -> str:
    """Returns a description of the token."""
    if token.type == TOKEN_NAME:
        return token.value

    return _describe_token_type(token.type)


def describe_token_expr(expr: str) -> str:
    """Like `describe_token` but for token expressions."""
    if ":" in expr:
        type, value = expr.split(":", 1)

        if type == TOKEN_NAME:
            return value
    else:
        type = expr

    return _describe_token_type(type)


def count_newlines(value: str) -> int:
    """Count the number of newline characters in the string.  This is
    useful for extensions that filter a stream.
    """
    return len(newline_re.findall(value))


def compile_rules(environment: "Environment") -> t.List[t.Tuple[str, str]]:
    """Compiles all the rules from the environment into a list of rules."""
    e = re.escape
    rules = [
        (
            len(environment.comment_start_string),
            TOKEN_COMMENT_BEGIN,
            e(environment.comment_start_string),
        ),
        (
            len(environment.block_start_string),
            TOKEN_BLOCK_BEGIN,
            e(environment.block_start_string),
        ),
        (
            len(environment.variable_start_string),
            TOKEN_VARIABLE_BEGIN,
            e(environment.variable_start_string),
        ),
    ]

    if environment.line_statement_prefix is not None:
        rules.append(
            (
                len(environment.line_statement_prefix),
                TOKEN_LINESTATEMENT_BEGIN,
                r"^[ \t\v]*" + e(environment.line_statement_prefix),
            )
        )
    if environment.line_comment_prefix is not None:
        rules.append(
            (
                len(environment.line_comment_prefix),
                TOKEN_LINECOMMENT_BEGIN,
                r"(?:^|(?<=\S))[^\S\r\n]*" + e(environment.line_comment_prefix),
            )
        )

    return [x[1:] for x in sorted(rules, reverse=True)]


class Failure:
    """Class that raises a `TemplateSyntaxError` if called.
    Used by the `Lexer` to specify known errors.
    """

    def __init__(
        self, message: str, cls: t.Type[TemplateSyntaxError] = TemplateSyntaxError
    ) -> None:
        self.message = message
        self.error_class = cls

    def __call__(self, lineno: int, filename: str) -> "te.NoReturn":
        raise self.error_class(self.message, lineno, filename)


class Token(t.NamedTuple):
    lineno: int
    type: str
    value: str

    def __str__(self) -> str:
        return describe_token(self)

    def test(self, expr: str) -> bool:
        """Test a token against a token expression.  This can either be a
        token type or ``'token_type:token_value'``.  This can only test
        against string values and types.
        """
        # here we do a regular string equality check as test_any is usually
        # passed an iterable of not interned strings.
        if self.type == expr:
            return True

        if ":" in expr:
            return expr.split(":", 1) == [self.type, self.value]

        return False

    def test_any(self, *iterable: str) -> bool:
        """Test against multiple token expressions."""
        return any(self.test(expr) for expr in iterable)


class TokenStreamIterator:
    """The iterator for tokenstreams.  Iterate over the stream
    until the eof token is reached.
    """

    def __init__(self, stream: "TokenStream") -> None:
        self.stream = stream

    def __iter__(self) -> "TokenStreamIterator":
        return self

    def __next__(self) -> Token:
        token = self.stream.current

        if token.type is TOKEN_EOF:
            self.stream.close()
            raise StopIteration

        next(self.stream)
        return token


class TokenStream:
    """A token stream is an iterable that yields :class:`Token`\\s.  The
    parser however does not iterate over it but calls :meth:`next` to go
    one token ahead.  The current active token is stored as :attr:`current`.
    """

    def __init__(
        self,
        generator: t.Iterable[Token],
        name: t.Optional[str],
        filename: t.Optional[str],
    ):
        self._iter = iter(generator)
        self._pushed: "te.Deque[Token]" = deque()
        self.name = name
        self.filename = filename
        self.closed = False
        self.current = Token(1, TOKEN_INITIAL, "")
        next(self)

    def __iter__(self) -> TokenStreamIterator:
        return TokenStreamIterator(self)

    def __bool__(self) -> bool:
        return bool(self._pushed) or self.current.type is not TOKEN_EOF

    @property
    def eos(self) -> bool:
        """Are we at the end of the stream?"""
        return not self

    def push(self, token: Token) -> None:
        """Push a token back to the stream."""
        self._pushed.append(token)

    def look(self) -> Token:
        """Look at the next token."""
        old_token = next(self)
        result = self.current
        self.push(result)
        self.current = old_token
        return result

    def skip(self, n: int = 1) -> None:
        """Skip n tokens ahead."""
        for _ in range(n):
            next(self)

    def next_if(self, expr: str) -> t.Optional[Token]:
        """Perform the token test and return the token if it matched.
        Otherwise the return value is `None`.
        """
        if self.current.test(expr):
            return next(self)

        return None

    def skip_if(self, expr: str) -> bool:
        """Like :meth:`next_if` but only returns `True` or `False`."""
        return self.next_if(expr) is not None

    def __next__(self) -> Token:
        """Go one token ahead and return the old one.

        Use the built-in :func:`next` instead of calling this directly.
        """
        rv = self.current

        if self._pushed:
            self.current = self._pushed.popleft()
        elif self.current.type is not TOKEN_EOF:
            try:
                self.current = next(self._iter)
            except StopIteration:
                self.close()

        return rv

    def close(self) -> None:
        """Close the stream."""
        self.current = Token(self.current.lineno, TOKEN_EOF, "")
        self._iter = iter(())
        self.closed = True

    def expect(self, expr: str) -> Token:
        """Expect a given token type and return it.  This accepts the same
        argument as :meth:`jinja2.lexer.Token.test`.
        """
        if not self.current.test(expr):
            expr = describe_token_expr(expr)

            if self.current.type is TOKEN_EOF:
                raise TemplateSyntaxError(
                    f"unexpected end of template, expected {expr!r}.",
                    self.current.lineno,
                    self.name,
                    self.filename,
                )

            raise TemplateSyntaxError(
                f"expected token {expr!r}, got {describe_token(self.current)!r}",
                self.current.lineno,
                self.name,
                self.filename,
            )

        return next(self)


def get_lexer(environment: "Environment") -> "Lexer":
    """Return a lexer which is probably cached."""
    key = (
        environment.block_start_string,
        environment.block_end_string,
        environment.variable_start_string,
        environment.variable_end_string,
        environment.comment_start_string,
        environment.comment_end_string,
        environment.line_statement_prefix,
        environment.line_comment_prefix,
        environment.trim_blocks,
        environment.lstrip_blocks,
        environment.newline_sequence,
        environment.keep_trailing_newline,
    )
    lexer = _lexer_cache.get(key)

    if lexer is None:
        _lexer_cache[key] = lexer = Lexer(environment)

    return lexer


class OptionalLStrip(tuple):
    """A special tuple for marking a point in the state that can have
    lstrip applied.
    """

    __slots__ = ()

    # Even though it looks like a no-op, creating instances fails
    # without this.
    def __new__(cls, *members, **kwargs):  # type: ignore
        return super().__new__(cls, members)


class _Rule(t.NamedTuple):
    pattern: t.Pattern[str]
    tokens: t.Union[str, t.Tuple[str, ...], t.Tuple[str, str, str]]
    command: t.Optional[str]


class Lexer:
    """Class that implements a lexer for a given environment. Automatically
    created by the environment class, usually you don't have to do that.

    Note that the lexer is not automatically bound to an environment.
    Multiple environments can share the same lexer.
    """

    def __init__(self, environment: "Environment") -> None:
        # Builds the per-state ``_Rule`` tables (``self.rules``) from the
        # environment's delimiters (block/variable/comment start and end
        # strings, line statement and line comment prefixes) and the
        # whitespace-control settings (``trim_blocks``/``lstrip_blocks``).
        # The rule-table construction is not recoverable from this
        # compiled module; only the simple attribute setup is shown.
        self.newline_sequence = environment.newline_sequence
        self.keep_trailing_newline = environment.keep_trailing_newline
        ...

    def _normalize_newlines(self, value: str) -> str:
        """Replace all newlines with the configured sequence in strings
        and template data.
        """
        return newline_re.sub(self.newline_sequence, value)

    def tokenize(
        self,
        source: str,
        name: t.Optional[str] = None,
        filename: t.Optional[str] = None,
        state: t.Optional[str] = None,
    ) -> TokenStream:
        """Calls :meth:`tokeniter` and wraps the resulting token tuples
        in a :class:`TokenStream`.
        """
        stream = self.tokeniter(source, name, filename, state)
        return TokenStream(self.wrap(stream, name, filename), name, filename)

    def wrap(
        self,
        stream: t.Iterable[t.Tuple[int, str, str]],
        name: t.Optional[str] = None,
        filename: t.Optional[str] = None,
    ) -> t.Iterator[Token]:
        """This is called with the stream as returned by `tokenize` and wraps
        every token in a :class:`Token` and converts the value.
        """
        for lineno, token, value_str in stream:
            if token in ignored_tokens:
                continue

            value: t.Any = value_str

            if token == TOKEN_LINESTATEMENT_BEGIN:
                token = TOKEN_BLOCK_BEGIN
            elif token == TOKEN_LINESTATEMENT_END:
                token = TOKEN_BLOCK_END
            # we are not interested in those tokens in the parser
            elif token in (TOKEN_RAW_BEGIN, TOKEN_RAW_END):
                continue
            elif token == TOKEN_DATA:
                value = self._normalize_newlines(value_str)
            elif token == "keyword":
                token = value_str
            elif token == TOKEN_NAME:
                value = value_str

                if not value.isidentifier():
                    raise TemplateSyntaxError(
                        "Invalid character in identifier", lineno, name, filename
                    )
            elif token == TOKEN_STRING:
                # try to unescape string
                try:
                    value = (
                        self._normalize_newlines(value_str[1:-1])
                        .encode("ascii", "backslashreplace")
                        .decode("unicode-escape")
                    )
                except Exception as e:
                    msg = str(e).split(":")[-1].strip()
                    raise TemplateSyntaxError(msg, lineno, name, filename) from e
            elif token == TOKEN_INTEGER:
                value = int(value_str.replace("_", ""))
            elif token == TOKEN_FLOAT:
                # remove all "_" first to support more Python versions
                value = literal_eval(value_str.replace("_", ""))
            elif token == TOKEN_OPERATOR:
                token = operators[value_str]

            yield Token(lineno, token, value)

    def tokeniter(
        self,
        source: str,
        name: t.Optional[str] = None,
        filename: t.Optional[str] = None,
        state: t.Optional[str] = None,
    ) -> t.Iterator[t.Tuple[int, str, str]]:
        """This method tokenizes the text and returns the tokens in a
        generator. Use this method if you just want to tokenize a template.

        .. versionchanged:: 3.0
            Only ``\\n``, ``\\r\\n`` and ``\\r`` are treated as line
            breaks.
        """
        # The state-machine body (matching the compiled rule tables
        # against the source, tracking the state stack, balancing
        # brackets, and applying whitespace control) is not recoverable
        # from this compiled module and is elided.
        ...
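

# The pushback mechanics above (``look`` peeks one token ahead by
# consuming it and pushing it back) can be sketched with a
# self-contained miniature.  ``Tok`` and ``PushbackStream`` below are
# hypothetical stand-ins for this module's ``Token`` and
# ``TokenStream``, assuming only the ``current``/``push``/``look``
# semantics shown in the methods above; they are illustrative, not part
# of the module's API.

```python
from collections import deque
from typing import Iterator, NamedTuple


class Tok(NamedTuple):
    # hypothetical stand-in for jinja2.lexer.Token
    lineno: int
    type: str
    value: str


class PushbackStream:
    """Minimal sketch of TokenStream's pushback mechanics."""

    def __init__(self, tokens: Iterator[Tok]) -> None:
        self._iter = iter(tokens)
        self._pushed = deque()
        self.current = Tok(1, "initial", "")
        next(self)  # advance onto the first real token

    def push(self, token: Tok) -> None:
        # queue a token to be returned before the underlying iterator
        self._pushed.append(token)

    def look(self) -> Tok:
        # peek: advance, remember the next token, push it back, restore
        old_token = next(self)
        result = self.current
        self.push(result)
        self.current = old_token
        return result

    def __next__(self) -> Tok:
        rv = self.current
        if self._pushed:
            self.current = self._pushed.popleft()
        else:
            try:
                self.current = next(self._iter)
            except StopIteration:
                self.current = Tok(rv.lineno, "eof", "")
        return rv


toks = [Tok(1, "name", "a"), Tok(1, "operator", "+"), Tok(1, "name", "b")]
stream = PushbackStream(iter(toks))
assert stream.current.value == "a"
assert stream.look().value == "+"   # peek does not move the stream
assert stream.current.value == "a"  # still on the same token
```

# Because ``look`` restores ``current`` after pushing the peeked token
# back, a parser can branch on the upcoming token without committing to
# consuming it -- the same trick the real TokenStream.look uses.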