"""ChunkWriter: write compressed data out with a fixed upper bound."""

from __future__ import absolute_import

import zlib
from zlib import Z_FINISH, Z_SYNC_FLUSH


class ChunkWriter(object):
    """ChunkWriter allows writing of compressed data with a fixed size.

    If less data is supplied than fills a chunk, the chunk is padded with
    NULL bytes. If more data is supplied, then the writer packs as much
    in as it can, but never splits any item it was given.

    The algorithm for packing is open to improvement! Currently it is:

    - write the bytes given
    - if the total seen bytes so far exceeds the chunk size, flush.

    :cvar _max_repack: To fit the maximum number of entries into a node, we
        will sometimes start over and compress the whole list to get tighter
        packing. We get diminishing returns after a while, so this limits
        the number of times we will try.
        The default is to try to avoid recompressing entirely, but setting
        this to something like 20 will give maximum compression.

    :cvar _max_zsync: Another tunable knob. If _max_repack is set to 0, then
        you can limit the number of times we will try to pack more data into
        a node. This allows us to do a single compression pass, rather than
        trying until we overflow and then recompressing again.
    """
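    # Illustrative usage sketch (not part of the original module): fill a
    # writer until write() reports overflow, then emit the padded chunk.
    # The 4096-byte chunk size and the `records` iterable are assumptions
    # made up for this example.
    #
    #   writer = ChunkWriter(4096)
    #   for record in records:
    #       if writer.write(record):
    #           break  # record did not fit and was not added
    #   bytes_list, unused, num_nulls = writer.finish()
    #   chunk = ''.join(bytes_list)  # exactly 4096 bytes, NULL-padded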
    _max_repack = 0
    _max_zsync = 8

    _repack_opts_for_speed = (0, 8)
    _repack_opts_for_size = (20, 0)

    def __init__(self, chunk_size, reserved=0, optimize_for_size=False):
        """Create a ChunkWriter to write chunk_size chunks.

        :param chunk_size: The total byte count to emit at the end of the
            chunk.
        :param reserved: How many bytes to allow for reserved data. Reserved
            data space can only be written to via write(..., reserved=True).
        """
        self.chunk_size = chunk_size
        self.compressor = zlib.compressobj()
        self.bytes_in = []
        self.bytes_list = []
        self.bytes_out_len = 0
        # Bytes that have been compressed but not yet flushed to bytes_list.
        self.unflushed_in_bytes = 0
        self.num_repack = 0
        self.num_zsync = 0
        self.unused_bytes = None
        self.reserved_size = reserved
        self.set_optimize(for_size=optimize_for_size)
    def finish(self):
        """Finish the chunk.

        This returns the final compressed chunk, and either None, or the
        bytes that did not fit in the chunk.

        :return: (compressed_bytes, unused_bytes, num_nulls_needed)

            * compressed_bytes: a list of bytes that were output from the
              compressor. If the compressed length was not exactly
              chunk_size, the final string will be a string of all null
              bytes to pad this to chunk_size.
            * unused_bytes: None, or the last bytes that were added, which
              we could not fit.
            * num_nulls_needed: How many nulls are padded at the end.
        """
        self.bytes_in = None  # Free the pending input; it is no longer needed
        out = self.compressor.flush(Z_FINISH)
        self.bytes_list.append(out)
        self.bytes_out_len += len(out)

        if self.bytes_out_len > self.chunk_size:
            raise AssertionError('Somehow we ended up with too much'
                                 ' compressed data, %d > %d'
                                 % (self.bytes_out_len, self.chunk_size))
        nulls_needed = self.chunk_size - self.bytes_out_len
        if nulls_needed:
            self.bytes_list.append("\x00" * nulls_needed)
        return self.bytes_list, self.unused_bytes, nulls_needed
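    # Illustrative check (not from the original source) of the finish()
    # contract: the concatenated output is always exactly chunk_size bytes,
    # with num_nulls_needed trailing NULs. `writer` is assumed to be a
    # ChunkWriter that has already accepted some writes.
    #
    #   bytes_list, unused, num_nulls = writer.finish()
    #   data = ''.join(bytes_list)
    #   assert len(data) == writer.chunk_size
    #   assert num_nulls == 0 or data[-num_nulls:] == '\x00' * num_nulls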
    def set_optimize(self, for_size=True):
        """Change how we optimize our writes.

        :param for_size: If True, optimize for minimum space usage,
            otherwise optimize for fastest writing speed.
        :return: None
        """
        if for_size:
            opts = ChunkWriter._repack_opts_for_size
        else:
            opts = ChunkWriter._repack_opts_for_speed
        self._max_repack, self._max_zsync = opts
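    # Illustrative sketch (assumed values, not from the original source) of
    # the two tuning modes: _repack_opts_for_size allows many recompression
    # passes for tighter packing, while _repack_opts_for_speed caps the work
    # at a few Z_SYNC_FLUSH attempts instead.
    #
    #   writer = ChunkWriter(4096)
    #   writer.set_optimize(for_size=True)   # slower, densest chunks
    #   writer.set_optimize(for_size=False)  # faster, looser chunks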
    def _recompress_all_bytes_in(self, extra_bytes=None):
        """Recompress the current bytes_in, and optionally more.

        :param extra_bytes: Optional, if supplied we will add it with
            Z_SYNC_FLUSH
        :return: (bytes_out, bytes_out_len, alt_compressed)

            * bytes_out: the compressed bytes returned from the compressor
            * bytes_out_len: the length of the compressed output
            * compressor: An object with everything packed in so far, and
              Z_SYNC_FLUSH called.
        """
        compressor = zlib.compressobj()
        bytes_out = []
        append = bytes_out.append
        compress = compressor.compress
        for accepted_bytes in self.bytes_in:
            out = compress(accepted_bytes)
            if out:
                append(out)
        if extra_bytes:
            out = compress(extra_bytes)
            out += compressor.flush(Z_SYNC_FLUSH)
            append(out)
        bytes_out_len = sum(map(len, bytes_out))
        return bytes_out, bytes_out_len, compressor
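    # Illustrative sketch (not from the original source) of why Z_SYNC_FLUSH
    # matters here: it forces zlib to emit all pending output so the
    # compressed length can be measured, without ending the stream the way
    # Z_FINISH would. The payload string is an arbitrary example.
    #
    #   c = zlib.compressobj()
    #   partial = c.compress('some bytes' * 100) + c.flush(zlib.Z_SYNC_FLUSH)
    #   # len(partial) now reflects everything fed in so far, and c can
    #   # still accept more input afterwards.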
    def write(self, bytes, reserved=False):
        """Write some bytes to the chunk.

        If the bytes fit, False is returned. Otherwise True is returned
        and the bytes have not been added to the chunk.

        :param bytes: The bytes to include
        :param reserved: If True, we can use the space reserved in the
            constructor.
        """
        if self.num_repack > self._max_repack and not reserved:
            self.unused_bytes = bytes
            return True