Example #1
import time
from zope.interface import implementer
from twisted.internet import defer
from foolscap.api import fireEventually
from allmydata import uri
from allmydata.storage.server import si_b2a
from allmydata.hashtree import HashTree
from allmydata.util import mathutil, hashutil, base32, log, happinessutil
from allmydata.util.assertutil import _assert, precondition
from allmydata.codec import CRSEncoder
from allmydata.interfaces import IEncoder, IStorageBucketWriter, \
     IEncryptedUploadable, IUploadStatus, UploadUnhappinessError

from ..util.eliotutil import log_call_deferred
"""
The goal of the encoder is to turn the original file into a series of
'shares'. Each share is going to a 'shareholder' (nominally each shareholder
is a different host, but for small grids there may be overlap). The number
of shares is chosen to hit our reliability goals (more shares on more
machines means more reliability), and is limited by overhead (proportional to
numshares or log(numshares)) and the encoding technology in use (zfec permits
only 256 shares total). It is also constrained by the amount of data
we want to send to each host. For estimating purposes, think of 10 shares
out of which we need 3 to reconstruct the file.

The encoder starts by cutting the original file into segments. All segments
except the last are of equal size. The segment size is chosen to constrain
the memory footprint (which will probably vary between 1x and 4x segment
size) and to constrain the overhead (which will be proportional to
log(number of segments)).
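
To make the 3-of-10 estimate above concrete, the following short sketch (a hypothetical helper, not part of the encoder module; the 128 KiB default segment size is an illustrative assumption) shows how the k-of-n parameters translate into per-share size and total expansion, using the ceiling-division helper from allmydata.util.mathutil that the module already imports.

# Illustrative helper only -- not part of allmydata.
from allmydata.util import mathutil   # div_ceil(n, d): ceiling division

def estimate_encoding(file_size, k=3, n=10, max_segment_size=128 * 1024):
    """Rough per-share numbers for a k-of-n encoding of a file_size-byte file."""
    assert file_size > 0
    assert 1 <= k <= n <= 256          # zfec permits at most 256 shares total
    segment_size = min(file_size, max_segment_size)
    num_segments = mathutil.div_ceil(file_size, segment_size)
    # Each segment is split into k blocks, which the erasure code expands to
    # n blocks, one per shareholder, so each share holds roughly 1/k of the file.
    block_size = mathutil.div_ceil(segment_size, k)
    share_size = block_size * num_segments
    expansion = float(n) / k           # total stored bytes vs. original size
    return num_segments, share_size, expansion

# With k=3 and n=10, any 3 of the 10 shareholders can rebuild the file, at the
# cost of storing roughly 10/3 (about 3.3x) the original data across the grid.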
Example #2
import time
from zope.interface import implements
from twisted.internet import defer
from foolscap.api import fireEventually
from allmydata import uri
from allmydata.storage.server import si_b2a
from allmydata.hashtree import HashTree
from allmydata.util import mathutil, hashutil, base32, log, happinessutil
from allmydata.util.assertutil import _assert, precondition
from allmydata.codec import CRSEncoder
from allmydata.interfaces import IEncoder, IStorageBucketWriter, \
     IEncryptedUploadable, IUploadStatus, UploadUnhappinessError


"""
The goal of the encoder is to turn the original file into a series of
'shares'. Each share is going to a 'shareholder' (nominally each shareholder
is a different host, but for small grids there may be overlap). The number
of shares is chosen to hit our reliability goals (more shares on more
machines means more reliability), and is limited by overhead (proportional to
numshares or log(numshares)) and the encoding technology in use (zfec permits
only 256 shares total). It is also constrained by the amount of data
we want to send to each host. For estimating purposes, think of 10 shares
out of which we need 3 to reconstruct the file.

The encoder starts by cutting the original file into segments. All segments
except the last are of equal size. The segment size is chosen to constrain
the memory footprint (which will probably vary between 1x and 4x segment
size) and to constrain the overhead (which will be proportional to
log(number of segments)).
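
To complement the segment discussion, here is a second illustrative sketch (again a hypothetical helper with assumed constants, e.g. 32-byte hashes) of why per-share overhead grows only with log(number of segments): a binary hash tree over the segments has depth of about log2(num_segments), while peak memory during encoding is expected to stay between 1x and 4x the segment size.

# Illustrative helper only -- not part of allmydata.
import math

HASH_SIZE = 32   # assumed hash size in bytes (SHA-256-based hashes)

def estimate_overhead(file_size, segment_size):
    """Rough hash-tree overhead and memory bounds for a segmented upload."""
    num_segments = max(1, -(-file_size // segment_size))   # ceiling division
    # A binary Merkle tree over num_segments leaves has depth ~log2(num_segments),
    # so checking any one block needs about that many sibling hashes; this is
    # the log(number of segments) overhead term mentioned above.
    depth = math.ceil(math.log2(num_segments)) if num_segments > 1 else 0
    per_block_proof_bytes = depth * HASH_SIZE
    # Peak memory while encoding one segment is expected to fall between
    # 1x and 4x the segment size, per the range stated in the docstring.
    memory_range = (segment_size, 4 * segment_size)
    return num_segments, per_block_proof_bytes, memory_range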