# boto/glacier/writer.py
import hashlib

from boto.glacier.utils import chunk_hashes, tree_hash, bytes_to_hex
from boto.glacier.utils import compute_hashes_from_fileobj


_ONE_MEGABYTE = 1024 * 1024


class _Partitioner(object):
    """Convert variable-size writes into part-sized writes.

    Call write(data) with variable sized data as needed to write all
    data. Call flush() after all data is written.

    This instance will call send_fn(part_data) as needed in part_size
    pieces, except for the final part, which may be shorter than
    part_size. Make sure to call flush() to ensure that a short final
    part results in a final send_fn call.

    """
    def __init__(self, part_size, send_fn):
        self.part_size = part_size
        self.send_fn = send_fn
        self._buffer = []
        self._buffer_size = 0

    def write(self, data):
        if data == b'':
            return
        self._buffer.append(data)
        self._buffer_size += len(data)
        while self._buffer_size > self.part_size:
            self._send_part()

    def _send_part(self):
        data = b''.join(self._buffer)
        # Put any data remaining over the part size back into the buffer.
        if len(data) > self.part_size:
            self._buffer = [data[self.part_size:]]
            self._buffer_size = len(self._buffer[0])
        else:
            self._buffer = []
            self._buffer_size = 0
        # The part we will send.
        part = data[:self.part_size]
        self.send_fn(part)

    def flush(self):
        if self._buffer_size > 0:
            self._send_part()


class _Uploader(object):
    """Upload to a Glacier upload_id.

    Call upload_part for each part (in any order) and then close to
    complete the upload.

    """
    def __init__(self, vault, upload_id, part_size, chunk_size=_ONE_MEGABYTE):
        self.vault = vault
        self.upload_id = upload_id
        self.part_size = part_size
        self.chunk_size = chunk_size
        self.archive_id = None

        self._uploaded_size = 0
        self._tree_hashes = []

        self.closed = False

    def _insert_tree_hash(self, index, raw_tree_hash):
        # Grow the list as needed so that parts may arrive in any order.
        list_length = len(self._tree_hashes)
        if index >= list_length:
            self._tree_hashes.extend([None] * (index - list_length + 1))
        self._tree_hashes[index] = raw_tree_hash

    def upload_part(self, part_index, part_data):
        """Upload a part to Glacier.

        :param part_index: part number where 0 is the first part
        :param part_data: data to upload corresponding to this part

        """
        if self.closed:
            raise ValueError("I/O operation on closed file")
        # Glacier requires both a linear SHA-256 of the part and a tree
        # hash computed over fixed-size chunks.
        part_tree_hash = tree_hash(chunk_hashes(part_data, self.chunk_size))
        self._insert_tree_hash(part_index, part_tree_hash)

        hex_tree_hash = bytes_to_hex(part_tree_hash)
        linear_hash = hashlib.sha256(part_data).hexdigest()
        start = self.part_size * part_index
        content_range = (start, (start + len(part_data)) - 1)
        response = self.vault.layer1.upload_part(self.vault.name,
                                                 self.upload_id,
                                                 linear_hash,
                                                 hex_tree_hash,
                                                 content_range, part_data)
        response.read()
        self._uploaded_size += len(part_data)

    def skip_part(self, part_index, part_tree_hash, part_length):
        """Skip uploading of a part.

        The final close call needs to calculate the tree hash and total size
        of all uploaded data, so this is the mechanism for resume
        functionality to provide it without actually uploading the data
        again.

        :param part_index: part number where 0 is the first part
        :param part_tree_hash: binary tree_hash of part being skipped
        :param part_length: length of part being skipped

        """
        if self.closed:
            raise ValueError("I/O operation on closed file")
        self._insert_tree_hash(part_index, part_tree_hash)
        self._uploaded_size += part_length

    def close(self):
        if self.closed:
            return
        if None in self._tree_hashes:
            raise RuntimeError("Some parts were not uploaded.")
        # Complete the multipart Glacier upload.
        hex_tree_hash = bytes_to_hex(tree_hash(self._tree_hashes))
        response = self.vault.layer1.complete_multipart_upload(
            self.vault.name, self.upload_id, hex_tree_hash,
            self._uploaded_size)
        self.archive_id = response['ArchiveId']
        self.closed = True


def generate_parts_from_fobj(fobj, part_size):
    data = fobj.read(part_size)
    while data:
        yield data.encode('utf-8') if not isinstance(data, bytes) else data
        data = fobj.read(part_size)


def resume_file_upload(vault, upload_id, part_size, fobj, part_hash_map,
                       chunk_size=_ONE_MEGABYTE):
    """Resume upload of a file already part-uploaded to Glacier.

    The resumption of an upload where the part-uploaded section is empty is a
    valid degenerate case that this function can handle. In this case,
    part_hash_map should be an empty dict.

    :param vault: boto.glacier.vault.Vault object.
    :param upload_id: existing Glacier upload id of upload being resumed.
    :param part_size: part size of existing upload.
    :param fobj: file object containing local data to resume. This must read
        from the start of the entire upload, not just from the point being
        resumed. Use fobj.seek(0) to achieve this if necessary.
    :param part_hash_map: {part_index: part_tree_hash, ...} of data already
        uploaded. Each supplied part_tree_hash will be verified and the part
        re-uploaded if there is a mismatch.
    :param chunk_size: chunk size of tree hash calculation. This must be
        1 MiB for Amazon.

    """
    uploader = _Uploader(vault, upload_id, part_size, chunk_size)
    for part_index, part_data in enumerate(
            generate_parts_from_fobj(fobj, part_size)):
        part_tree_hash = tree_hash(chunk_hashes(part_data, chunk_size))
        # Re-upload any part whose hash is missing or does not match.
        if (part_index not in part_hash_map or
                part_hash_map[part_index] != part_tree_hash):
            uploader.upload_part(part_index, part_data)
        else:
            uploader.skip_part(part_index, part_tree_hash, len(part_data))
    uploader.close()
    return uploader.archive_id


class Writer(object):
    """
    Presents a file-like object for writing to an Amazon Glacier Archive.
    The data is written using the multipart upload API.
    """
    def __init__(self, vault, upload_id, part_size, chunk_size=_ONE_MEGABYTE):
        self.uploader = _Uploader(vault, upload_id, part_size, chunk_size)
        self.partitioner = _Partitioner(part_size, self._upload_part)
        self.closed = False
        self.next_part_index = 0

    def write(self, data):
        if self.closed:
            raise ValueError("I/O operation on closed file")
        self.partitioner.write(data)

    def _upload_part(self, part_data):
        self.uploader.upload_part(self.next_part_index, part_data)
        self.next_part_index += 1

    def close(self):
        if self.closed:
            return
        self.partitioner.flush()
        self.uploader.close()
        self.closed = True

    def get_archive_id(self):
        self.close()
        return self.uploader.archive_id

    @property
    def current_tree_hash(self):
        """
        Returns the current tree hash for the data that's been written
        **so far**.

        Only once the writing is complete is the final tree hash returned.
        """
        return tree_hash(self.uploader._tree_hashes)

    @property
    def current_uploaded_size(self):
        """
        Returns the current uploaded size for the data that's been written
        **so far**.

        Only once the writing is complete is the final uploaded size
        returned.
        """
        return self.uploader._uploaded_size

    @property
    def upload_id(self):
        return self.uploader.upload_id

    @property
    def vault(self):
        return self.uploader.vault
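

# ---------------------------------------------------------------------------
# Usage sketch (illustration only, not part of the module). It shows how a
# Writer is wired to a multipart upload: initiate the upload through layer1,
# stream arbitrary-size chunks through write(), then close to complete. The
# region, vault name, and local file path are hypothetical placeholders, and
# running this assumes valid AWS credentials and an existing vault.
# ---------------------------------------------------------------------------
if __name__ == '__main__':
    import boto.glacier

    layer2 = boto.glacier.connect_to_region('us-east-1')
    vault = layer2.get_vault('my-vault')  # hypothetical vault name

    # Glacier part sizes must be a power-of-two multiple of 1 MiB.
    part_size = 4 * _ONE_MEGABYTE
    upload_id = vault.layer1.initiate_multipart_upload(
        vault.name, part_size, 'example archive')['UploadId']

    writer = Writer(vault, upload_id, part_size)
    with open('backup.tar', 'rb') as fobj:  # hypothetical local file
        # The partitioner regroups these 64 KiB writes into full parts.
        for chunk in iter(lambda: fobj.read(64 * 1024), b''):
            writer.write(chunk)
    writer.close()
    print('archive id: %s' % writer.get_archive_id())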