def extract_xpak(tbz2file, tmpdir = None):
    """
    Extract the xpak metadata from the given tbz2 package file.

    @param tbz2file: path to the tbz2 package file
    @type tbz2file: string
    @keyword tmpdir: path to the temporary directory to use
    @type tmpdir: string
    @return: path to the unpacked xpak directory or None
    @rtype: string or None
    """
    # extract xpak content
    tmp_fd, tmp_path = const_mkstemp(
        prefix="entropy.spm.portage.extract_xpak")
    os.close(tmp_fd)
    try:
        done = suck_xpak(tbz2file, tmp_path)
        if not done:
            return None
        return unpack_xpak(tmp_path, tmpdir = tmpdir)
    finally:
        try:
            # unpack_xpak already removes it
            os.remove(tmp_path)
        except OSError as err:
            if err.errno != errno.ENOENT:
                raise
def _do_trigger_call_ext_generic(self):
    entropy_sh = "#!%s" % (etpConst['trigger_sh_interpreter'],)
    entropy_sh = const_convert_to_rawstring(entropy_sh)

    tmp_fd, tmp_path = const_mkstemp(
        prefix="_do_trigger_call_ext_generic")
    with os.fdopen(tmp_fd, "ab+") as tr_f:
        tr_f.write(const_convert_to_rawstring(self._pkgdata['trigger']))
        tr_f.flush()
        tr_f.seek(0)
        interpreter = tr_f.read(128)
        tr_f.seek(0)
        shell_intr = False
        if interpreter.startswith(entropy_sh):
            shell_intr = True

    try:
        if shell_intr:
            exc = self._EntropyShSandbox(self._entropy)
        else:
            exc = self._EntropyPySandbox(self._entropy)
        return exc.run(self._phase, self._pkgdata, tmp_path)
    finally:
        if shell_intr:
            try:
                os.remove(tmp_path)
            except OSError:
                pass
def resize_image(max_width, image_path, final_image_path):
    dirname = os.path.dirname(final_image_path)
    tmp_fd, new_image_path = const_mkstemp(
        dir=dirname, prefix="resize_image")
    os.close(tmp_fd)
    shutil.copy2(image_path, new_image_path)

    img = Gtk.Image()
    img.set_from_file(new_image_path)
    img_buf = img.get_pixbuf()
    w, h = img_buf.get_width(), img_buf.get_height()
    if w > max_width:
        # resize pix
        new_w = max_width
        new_h = new_w * h / w
        img_buf = img_buf.scale_simple(
            int(new_w), int(new_h),
            GdkPixbuf.InterpType.BILINEAR)
        try:
            img_buf.save(new_image_path, "png")
        except GObject.GError:
            # libpng issue? try jpeg
            img_buf.save(new_image_path, "jpeg")
    del img_buf
    del img

    os.rename(new_image_path, final_image_path)
def _show_license(self, uri, license_apps):
    """
    Show selected License to User.
    """
    tmp_fd, tmp_path = None, None
    try:
        license_text = None
        # get the first repo with valid license text
        repos = set([x.get_details().channelname for \
                         x in license_apps])
        if not repos:
            return
        with self._entropy.rwsem().reader():
            for repo_id in repos:
                repo = self._entropy.open_repository(repo_id)
                license_text = repo.retrieveLicenseText(uri)
                if license_text is not None:
                    break

        if license_text is not None:
            tmp_fd, tmp_path = const_mkstemp(suffix=".txt")
            try:
                license_text = const_convert_to_unicode(
                    license_text, enctype=etpConst['conf_encoding'])
            except UnicodeDecodeError:
                license_text = const_convert_to_unicode(
                    license_text)

            with entropy.tools.codecs_fdopen(
                tmp_fd, "w", etpConst['conf_encoding']) as tmp_f:
                tmp_f.write("License: %s\n" % (uri,))
                apps = self._licenses.get(uri, [])
                if apps:
                    tmp_f.write("Applications:\n")
                for app in apps:
                    tmp_f.write("\t%s\n" % (app.name,))
                if apps:
                    tmp_f.write("\n")
                tmp_f.write("-" * 79 + "\n")
                tmp_f.write(license_text)
                tmp_f.flush()
        else:
            const_debug_write(
                __name__,
                "LicensesNotificationBox._show_license: "
                "not available")
    finally:
        if tmp_fd is not None:
            try:
                os.close(tmp_fd)
            except OSError:
                pass

    # leaks, but xdg-open is async
    if tmp_path is not None:
        open_url(tmp_path)
def _get_license_text(license_name, repository_id):
    repo = entropy_client.open_repository(repository_id)
    text = repo.retrieveLicenseText(license_name)
    tmp_fd, tmp_path = const_mkstemp(prefix="LICENSES.")
    # text is a binary string, do not use codecs.open()
    with os.fdopen(tmp_fd, "wb") as tmp_f:
        tmp_f.write(text)
    return tmp_path
def generate_content_file(content, package_id = None,
                          filter_splitdebug = False, splitdebug = None,
                          splitdebug_dirs = None):
    """
    Generate a file containing the "content" metadata, reading from the
    given content list or iterator. Each item of "content" must contain
    (path, ftype). Each item is written to the file, one per line, in
    the following form: "[<package_id>|]<ftype>|<path>". The order of
    the elements in "content" is preserved.
    """
    tmp_dir = os.path.join(
        etpConst['entropyunpackdir'],
        "__generate_content_file_f")
    try:
        os.makedirs(tmp_dir, 0o755)
    except OSError as err:
        if err.errno != errno.EEXIST:
            raise

    tmp_fd, tmp_path = None, None
    generated = False
    try:
        tmp_fd, tmp_path = const_mkstemp(
            prefix="PackageContent", dir=tmp_dir)
        with FileContentWriter(tmp_fd) as tmp_f:
            for path, ftype in content:
                if filter_splitdebug and not splitdebug:
                    # if filter_splitdebug is enabled, this
                    # code filters out all the paths starting
                    # with splitdebug_dirs, if splitdebug is
                    # disabled for package.
                    _skip = False
                    for split_dir in splitdebug_dirs:
                        if path.startswith(split_dir):
                            _skip = True
                            break
                    if _skip:
                        continue
                tmp_f.write(package_id, path, ftype)

        generated = True
        return tmp_path
    finally:
        if tmp_fd is not None:
            try:
                os.close(tmp_fd)
            except OSError:
                pass
        if tmp_path is not None and not generated:
            try:
                os.remove(tmp_path)
            except (OSError, IOError):
                pass
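The line format described in the docstring above can be sketched standalone. `format_content_line` is a hypothetical helper for illustration, not the actual `FileContentWriter` implementation:

```python
# Hypothetical helper illustrating the "[<package_id>|]<ftype>|<path>"
# line format described by generate_content_file(); the real writing
# is handled by FileContentWriter.
def format_content_line(package_id, path, ftype):
    # the package_id field is optional and omitted when None
    if package_id is None:
        return "%s|%s" % (ftype, path)
    return "%s|%s|%s" % (package_id, ftype, path)

line_without_id = format_content_line(None, "/usr/bin/foo", "obj")
line_with_id = format_content_line(42, "/usr/share/doc", "dir")
```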
def _showdiff(self, entropy_client, fromfile, tofile):

    args = ["diff", "-Nu", "'"+fromfile+"'", "'"+tofile+"'"]
    output = getoutput(' '.join(args)).split("\n")

    coloured = []
    for line in output:
        if line.startswith("---"):
            line = darkred(line)
        elif line.startswith("+++"):
            line = darkgreen(line)
        elif line.startswith("@@"):
            line = brown(line)
        elif line.startswith("-"):
            line = blue(line)
        elif line.startswith("+"):
            line = darkgreen(line)
        coloured.append(line + "\n")

    fd, tmp_path = None, None
    try:
        fd, tmp_path = const_mkstemp(
            suffix="equo.conf.showdiff")
        with os.fdopen(fd, "w") as f:
            f.writelines(coloured)
            f.flush()
    finally:
        if fd is not None:
            try:
                os.close(fd)
            except OSError:
                pass

    print("")
    pager = os.getenv("PAGER", "/usr/bin/less")
    if os.path.lexists(pager):
        if pager == "/usr/bin/less":
            args = [pager, "-R", "--no-init",
                "--QUIT-AT-EOF", tmp_path]
        else:
            args = [pager, tmp_path]
    else:
        args = ["/bin/cat", tmp_path]
    try:
        subprocess.call(args)
    except OSError as err:
        if err.errno != errno.ENOENT:
            raise
        args = ["/bin/cat", tmp_path]
        subprocess.call(args)

    os.remove(tmp_path)

    if output == ['']:
        return []
    return output
def upload_many(self, load_path_list, remote_dir):

    def do_rm(path):
        try:
            os.remove(path)
        except OSError:
            pass

    # first of all, copy files renaming them
    tmp_file_map = {}
    try:
        for load_path in load_path_list:
            tmp_fd, tmp_path = const_mkstemp(
                suffix = EntropyUriHandler.TMP_TXC_FILE_EXT,
                prefix = "._%s" % (os.path.basename(load_path),))
            os.close(tmp_fd)
            shutil.copy2(load_path, tmp_path)
            tmp_file_map[tmp_path] = load_path

        args = [EntropySshUriHandler._TXC_CMD]
        c_args, remote_str = self._setup_common_args(remote_dir)
        args += c_args
        args += ["-B", "-P", str(self.__port)]
        args += sorted(tmp_file_map.keys())
        args += [remote_str]

        upload_sts = self._fork_cmd(args) == os.EX_OK
        if not upload_sts:
            return False

        # atomic rename
        rename_fine = True
        for tmp_path, orig_path in tmp_file_map.items():
            tmp_file = os.path.basename(tmp_path)
            orig_file = os.path.basename(orig_path)
            tmp_remote_path = os.path.join(remote_dir, tmp_file)
            remote_path = os.path.join(remote_dir, orig_file)
            self.output(
                "<-> %s %s %s" % (
                    brown(tmp_file),
                    teal("=>"),
                    darkgreen(orig_file),
                ),
                header = " ",
                back = True
            )
            rc = self.rename(tmp_remote_path, remote_path)
            if not rc:
                rename_fine = False
    finally:
        for path in tmp_file_map.keys():
            do_rm(path)

    return rename_fine
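The copy-to-temporary-then-rename pattern that upload_many() applies over SSH can be sketched locally. This is a simplified sketch assuming a POSIX filesystem; the function name is illustrative:

```python
import os
import shutil
import tempfile

def atomic_publish(src_path, dest_dir):
    # Copy to a dot-prefixed temporary name first, then rename into
    # place: readers never observe a partially written file, because
    # rename() is atomic within the same filesystem.
    tmp_fd, tmp_path = tempfile.mkstemp(
        dir=dest_dir, prefix="._%s" % (os.path.basename(src_path),))
    os.close(tmp_fd)
    shutil.copy2(src_path, tmp_path)
    final_path = os.path.join(dest_dir, os.path.basename(src_path))
    os.rename(tmp_path, final_path)
    return final_path
```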
def generate_dot(self):
    """
    Generate RAW dot file that can be used to feed graphviz
    """
    graph = self._generate_pydot()
    tmp_fd, tmp_path = const_mkstemp(prefix="entropy.graph",
        suffix=".dot")
    os.close(tmp_fd)
    graph.write_raw(tmp_path)
    const_setup_file(tmp_path, etpConst['entropygid'], 0o644)
    return tmp_path
def generate_png(self):
    """
    Generate a PNG from current Graph content.
    """
    graph = self._generate_pydot()
    tmp_fd, tmp_path = const_mkstemp(prefix="entropy.graph",
        suffix=".png")
    os.close(tmp_fd)
    graph.write_png(tmp_path)
    const_setup_file(tmp_path, etpConst['entropygid'], 0o644)
    return tmp_path
def _exec_cmd(self, args):

    fd, tmp_path = const_mkstemp(
        prefix="entropy.transceivers.ssh_plug")
    fd_err, tmp_path_err = const_mkstemp(
        prefix="entropy.transceivers.ssh_plug")
    try:
        with os.fdopen(fd, "wb") as std_f:
            with os.fdopen(fd_err, "wb") as std_f_err:
                proc = self._subprocess.Popen(args,
                    stdout = std_f, stderr = std_f_err)
                exec_rc = proc.wait()
        enc = etpConst['conf_encoding']
        with codecs.open(tmp_path, "r", encoding=enc) as std_f:
            output = std_f.read()
        with codecs.open(tmp_path_err, "r", encoding=enc) as std_f:
            error = std_f.read()
    finally:
        os.remove(tmp_path)
        os.remove(tmp_path_err)
    return exec_rc, output, error
def lock(self, remote_path):
    # The only atomic operation on FTP seems to be mkdir().
    # But there is no actual guarantee because it really depends
    # on the server implementation.
    # FTP is very old, got to live with it.
    self.__connect_if_not()

    remote_path_lock = os.path.join(
        os.path.dirname(remote_path),
        "." + os.path.basename(remote_path) + ".lock")
    remote_ptr = os.path.join(self.__ftpdir, remote_path)
    remote_ptr_lock = os.path.join(self.__ftpdir, remote_path_lock)
    const_debug_write(__name__,
        "lock(): remote_ptr: %s, lock: %s" % (
            remote_ptr, remote_ptr_lock,))

    try:
        self._mkdir(remote_ptr_lock)
    except self.ftplib.error_perm as e:
        return False

    # now we can create the lock file reliably
    tmp_fd, tmp_path = None, None
    try:
        tmp_fd, tmp_path = const_mkstemp(
            prefix="entropy.txc.ftp.lock")
        # check if remote_ptr is already there
        if self._is_path_available(remote_ptr):
            return False
        with open(tmp_path, "rb") as f:
            rc = self.__ftpconn.storbinary(
                "STOR " + remote_ptr, f)
        done = rc.find("226") != -1
        if not done:
            # wtf?
            return False
        return True
    finally:
        if tmp_fd is not None:
            os.close(tmp_fd)
        if tmp_path is not None:
            os.remove(tmp_path)
        # and always remove the directory created with _mkdir()
        # we hope that, if we were able to create it, we're also
        # able to remove it.
        self._rmdir(remote_ptr_lock)
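The mkdir()-as-mutex trick that lock() relies on over FTP can be shown with a local directory. A minimal sketch, assuming POSIX errno semantics; the helper names are illustrative:

```python
import errno
import os

def try_lock(lock_dir):
    # os.mkdir() either creates the directory or fails with EEXIST:
    # exactly one concurrent caller can win, which is the same
    # atomicity property lock() above exploits on the FTP server.
    try:
        os.mkdir(lock_dir)
        return True
    except OSError as err:
        if err.errno == errno.EEXIST:
            return False
        raise

def unlock(lock_dir):
    os.rmdir(lock_dir)
```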
def _encode_multipart_form(self, params, file_params, boundary):
    """
    Encode parameters and files into a valid HTTP multipart form data.
    NOTE: this method loads the whole file in RAM, HTTP post doesn't
    work well for big files anyway.
    """
    def _cast_to_str(value):
        if value is None:
            return const_convert_to_rawstring("")
        elif isinstance(value, (int, float, long)):
            return const_convert_to_rawstring(value)
        elif isinstance(value, (list, tuple)):
            return repr(value)
        return value

    tmp_fd, tmp_path = const_mkstemp(
        prefix="_encode_multipart_form")
    tmp_f = os.fdopen(tmp_fd, "ab+")
    tmp_f.truncate(0)
    crlf = '\r\n'

    for key, value in params.items():
        tmp_f.write("--" + boundary + crlf)
        tmp_f.write("Content-Disposition: form-data; name=\"%s\"" % (
            key,))
        tmp_f.write(crlf + crlf + _cast_to_str(value) + crlf)

    for key, (f_name, f_obj) in file_params.items():
        tmp_f.write("--" + boundary + crlf)
        tmp_f.write(
            "Content-Disposition: form-data; name=\"%s\"; "
            "filename=\"%s\"" % (key, f_name,))
        tmp_f.write(crlf)
        tmp_f.write("Content-Type: application/octet-stream" + crlf)
        tmp_f.write("Content-Transfer-Encoding: binary" + crlf + crlf)
        f_obj.seek(0)
        while True:
            chunk = f_obj.read(65536)
            if not chunk:
                break
            tmp_f.write(chunk)
        tmp_f.write(crlf)

    tmp_f.write("--" + boundary + "--" + crlf + crlf)
    tmp_f.flush()
    return tmp_f, tmp_path
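The multipart/form-data layout produced above can be sketched with an in-memory builder. This is a simplified, stdlib-only sketch of the same wire format, not the actual transceiver code:

```python
def encode_multipart(params, file_params, boundary):
    # Build a multipart/form-data body as bytes, mirroring the layout
    # written by _encode_multipart_form(): plain fields first, then
    # file payloads, then the closing boundary.
    crlf = b"\r\n"
    bnd = boundary.encode("ascii")
    parts = []
    for key, value in params.items():
        parts += [
            b"--" + bnd, crlf,
            b'Content-Disposition: form-data; name="' +
            key.encode("ascii") + b'"', crlf, crlf,
            str(value).encode("utf-8"), crlf,
        ]
    for key, (f_name, payload) in file_params.items():
        parts += [
            b"--" + bnd, crlf,
            b'Content-Disposition: form-data; name="' +
            key.encode("ascii") + b'"; filename="' +
            f_name.encode("ascii") + b'"', crlf,
            b"Content-Type: application/octet-stream", crlf, crlf,
            payload, crlf,
        ]
    parts += [b"--" + bnd + b"--", crlf]
    return b"".join(parts)
```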
def _commit_message(self, entropy_server, successfull_mirrors):
    """
    Ask user to enter the commit message for data being pushed.
    Store inside rss metadata object.
    """
    enc = etpConst['conf_encoding']
    tmp_fd, tmp_commit_path = const_mkstemp(
        prefix="eit._push", suffix=".COMMIT_MSG")
    with entropy.tools.codecs_fdopen(tmp_fd, "w", enc) as tmp_f:
        tmp_f.write(EitPush.DEFAULT_REPO_COMMIT_MSG)
        if successfull_mirrors:
            tmp_f.write(const_convert_to_unicode(
                "# Changes to be committed:\n"))
        for sf_mirror in sorted(successfull_mirrors):
            tmp_f.write(const_convert_to_unicode(
                "#\t updated: %s\n" % (sf_mirror,)))

    # spawn editor
    cm_msg_rc = entropy_server.edit_file(tmp_commit_path)
    commit_msg = None
    if not cm_msg_rc:
        # wtf?, fallback to old way
        def fake_callback(*args, **kwargs):
            return True

        input_params = [
            ('message', _("Commit message"), fake_callback, False)]
        commit_data = entropy_server.input_box(
            _("Enter the commit message"),
            input_params, cancel_button = True)
        if commit_data:
            commit_msg = const_convert_to_unicode(
                commit_data['message'])
    else:
        commit_msg = const_convert_to_unicode("")
        with codecs.open(tmp_commit_path, "r", encoding=enc) as tmp_f:
            for line in tmp_f.readlines():
                if line.strip().startswith("#"):
                    continue
                commit_msg += line
        entropy_server.output(commit_msg)

    os.remove(tmp_commit_path)
    return commit_msg
def do_compare_gpg(pkg_path, hash_val):
    try:
        repo_sec = self._entropy.RepositorySecurity()
    except RepositorySecurity.GPGServiceNotAvailable:
        return None

    # check if we have repository pubkey
    try:
        if not repo_sec.is_pubkey_available(repository_id):
            return None
    except repo_sec.KeyExpired:
        # key is expired
        return None

    # write gpg signature to disk for verification
    tmp_fd, tmp_path = const_mkstemp(prefix="do_compare_gpg")
    with os.fdopen(tmp_fd, "w") as tmp_f:
        tmp_f.write(hash_val)

    try:
        # actually verify
        valid, err_msg = repo_sec.verify_file(
            repository_id, pkg_path, tmp_path)
    finally:
        os.remove(tmp_path)

    if valid:
        return True

    if err_msg:
        self._entropy.output(
            "%s: %s, %s" % (
                darkred(_("Package signature verification error for")),
                purple("GPG"),
                err_msg,
            ),
            importance = 0,
            level = "error",
            header = darkred(" ## ")
        )
    return False
def _interactive_merge_diff(self, source, destination):
    tmp_fd, tmp_path = None, None
    try:
        tmp_fd, tmp_path = const_mkstemp(
            suffix="equo.conf.intmerge")
        args = ("/usr/bin/sdiff", "-o", tmp_path, source,
            destination)
        rc = subprocess.call(args)
    except OSError as err:
        if err.errno != errno.ENOENT:
            raise
        rc = 2
        # tmp_path may be unset if const_mkstemp() itself failed
        if tmp_path is not None:
            os.remove(tmp_path)
    finally:
        if tmp_fd is not None:
            try:
                os.close(tmp_fd)
            except OSError:
                pass
    return tmp_path, rc
def generate_content_safety_file(content_safety):
    """
    Generate a file containing the "content_safety" metadata, reading
    from the given content_safety list or iterator. Each item of
    "content_safety" must contain (path, sha256, mtime). Each item is
    written to the file, one per line, in the following form:
    "<mtime>|<sha256>|<path>". The order of the elements in
    "content_safety" is preserved.
    """
    tmp_dir = os.path.join(
        etpConst['entropyunpackdir'],
        "__generate_content_safety_file_f")
    try:
        os.makedirs(tmp_dir, 0o755)
    except OSError as err:
        if err.errno != errno.EEXIST:
            raise

    tmp_fd, tmp_path = None, None
    generated = False
    try:
        tmp_fd, tmp_path = const_mkstemp(
            prefix="PackageContentSafety", dir=tmp_dir)
        with FileContentSafetyWriter(tmp_fd) as tmp_f:
            for path, sha256, mtime in content_safety:
                tmp_f.write(path, sha256, mtime)

        generated = True
        return tmp_path
    finally:
        if tmp_fd is not None:
            try:
                os.close(tmp_fd)
            except OSError:
                pass
        if tmp_path is not None and not generated:
            try:
                os.remove(tmp_path)
            except (OSError, IOError):
                pass
def read_xpak(tbz2file):
    """
    Read the xpak content of the given tbz2 package file.

    @param tbz2file: path to the tbz2 package file
    @type tbz2file: string
    @return: the raw xpak data or None
    @rtype: bytes or None
    """
    tmp_fd, tmp_path = const_mkstemp(
        prefix="entropy.spm.portage.read_xpak")
    os.close(tmp_fd)
    try:
        done = suck_xpak(tbz2file, tmp_path)
        if not done:
            return None
        with open(tmp_path, "rb") as f:
            data = f.read()
        return data
    finally:
        os.remove(tmp_path)
def data_send_available(self):
    """
    Return whether data send is correctly working. A temporary file
    with random content is sent to the service, that would need to
    calculate its md5 hash. For security reason, data will be accepted
    remotely if, and only if its size is < 256 bytes.
    """
    md5 = hashlib.md5()
    test_str = const_convert_to_rawstring("")
    for x in range(256):
        test_str += chr(x)
    md5.update(test_str)
    expected_hash = md5.hexdigest()
    func_name = "data_send_available"

    tmp_fd, tmp_path = const_mkstemp(prefix="data_send_available")
    try:
        with os.fdopen(tmp_fd, "ab+") as tmp_f:
            tmp_f.write(test_str)
            tmp_f.seek(0)
            params = {
                "test_param": "hello",
            }
            file_params = {
                "test_file": ("test_file.txt", tmp_f),
            }
            remote_hash = self._method_getter(func_name, params,
                cache = False, require_credentials = False,
                file_params = file_params)
    finally:
        os.remove(tmp_path)

    const_debug_write(__name__,
        "WebService.%s, expected: %s, got: %s" % (
            func_name, repr(expected_hash), repr(remote_hash),))
    return expected_hash == remote_hash
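The expected-hash computation from the docstring above (MD5 of the 256 bytes 0x00..0xFF, built byte by byte in the method) can be reproduced standalone:

```python
import hashlib

# 256-byte payload with values 0..255, equivalent to the test string
# that data_send_available() accumulates one chr(x) at a time.
payload = bytes(range(256))
expected_hash = hashlib.md5(payload).hexdigest()
```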
def handle_exception(exc_class, exc_instance, exc_tb):

    # restore original exception handler, to avoid loops
    uninstall_exception_handler()
    _text = TextInterface()

    if exc_class is SystemDatabaseError:
        _text.output(
            darkred(_("Installed packages repository corrupted. "
                      "Please re-generate it")),
            importance=1, level="error")
        os._exit(101)

    generic_exc_classes = (OnlineMirrorError, RepositoryError,
        PermissionDenied, FileNotFound, SPMError, SystemError)
    if exc_class in generic_exc_classes:
        _text.output(
            "%s: %s" % (exc_instance, darkred(_("Cannot continue")),),
            importance=1, level="error")
        os._exit(1)

    if exc_class is SystemExit:
        return

    if issubclass(exc_class, IOError):
        # in Python 3.3+ it's BrokenPipeError
        if exc_instance.errno == errno.EPIPE:
            return

    if exc_class is KeyboardInterrupt:
        os._exit(1)

    t_back = entropy.tools.get_traceback(tb_obj = exc_tb)
    if const_debug_enabled():
        sys.stdout = sys.__stdout__
        sys.stderr = sys.__stderr__
        sys.stdin = sys.__stdin__
        entropy.tools.print_exception(tb_data = exc_tb)
        pdb.set_trace()

    if exc_class in (IOError, OSError):
        if exc_instance.errno == errno.ENOSPC:
            print_generic(t_back)
            _text.output(
                "%s: %s" % (
                    exc_instance,
                    darkred(_("Your hard drive is full! Your fault!")),),
                importance=1, level="error")
            os._exit(5)
        elif exc_instance.errno == errno.ENOMEM:
            print_generic(t_back)
            _text.output(
                "%s: %s" % (
                    exc_instance,
                    darkred(_("No more memory dude! Your fault!")),),
                importance=1, level="error")
            os._exit(5)

    _text.output(
        darkred(_("Hi. My name is Bug Reporter. "
                  "I am sorry to inform you that the program crashed. "
                  "Well, you know, shit happens.")),
        importance=1, level="error")
    _text.output(
        darkred(_("But there's something you could "
                  "do to help me to be a better application.")),
        importance=1, level="error")
    _text.output(
        darkred(
            _("-- BUT, DO NOT SUBMIT THE SAME REPORT MORE THAN ONCE --")),
        importance=1, level="error")
    _text.output(
        darkred(
            _("Now I am showing you what happened. "
              "Don't panic, I'm here to help you.")),
        importance=1, level="error")

    entropy.tools.print_exception(tb_data = exc_tb)

    exception_data = entropy.tools.print_exception(silent = True,
        tb_data = exc_tb, all_frame_data = True)
    exception_tback_raw = const_convert_to_rawstring(t_back)

    error_fd, error_file = None, None
    try:
        error_fd, error_file = const_mkstemp(
            prefix="entropy.error.report.", suffix=".txt")
        with os.fdopen(error_fd, "wb") as ferror:
            ferror.write(
                const_convert_to_rawstring(
                    "\nRevision: %s\n\n" % (
                        etpConst['entropyversion'],)))
            ferror.write(exception_tback_raw)
            ferror.write(const_convert_to_rawstring("\n\n"))
            ferror.write(const_convert_to_rawstring(
                ''.join(exception_data)))
            ferror.write(const_convert_to_rawstring("\n"))
    except (OSError, IOError) as err:
        _text.output(
            "%s: %s" % (
                err,
                darkred(
                    _("Oh well, I cannot even write to TMPDIR. "
                      "So, please copy the error and "
                      "mail [email protected]."))),
            importance=1, level="error")
        os._exit(1)
    finally:
        if error_fd is not None:
            try:
                os.close(error_fd)
            except OSError:
                pass

    _text.output("", level="error")

    ask_msg = _("Erm... Can I send the error, "
                "along with some other information\nabout your "
                "hardware to my creators so they can fix me? "
                "(Your IP will be logged)")
    rc = _text.ask_question(ask_msg)
    if rc == _("No"):
        _text.output(
            darkgreen(_("Ok, ok ok ok... Sorry!")),
            level="error")
        os._exit(2)

    _text.output(
        darkgreen(
            _("If you want to be contacted back "
              "(and actively supported), also answer "
              "the questions below:")
        ),
        level="error")

    try:
        name = readtext(_("Your Full name:"))
        email = readtext(_("Your E-Mail address:"))
        description = readtext(_("What you were doing:"))
    except EOFError:
        os._exit(2)

    try:
        from entropy.client.interfaces.qa import UGCErrorReport
        from entropy.core.settings.base import SystemSettings
        _settings = SystemSettings()
        repository_id = _settings['repositories']['default_repository']
        error = UGCErrorReport(repository_id)
    except (OnlineMirrorError, AttributeError, ImportError,):
        error = None

    result = None
    if error is not None:
        error.prepare(exception_tback_raw, name, email,
            '\n'.join([x for x in exception_data]),
            description)
        result = error.submit()

    if result:
        _text.output(
            darkgreen(
                _("Thank you very much. The error has been "
                  "reported and hopefully, the problem will "
                  "be solved as soon as possible.")),
            level="error")
    else:
        _text.output(
            darkred(_("Ugh. Cannot send the report. "
                      "Please mail the file below "
                      "to [email protected].")),
            level="error")
        _text.output("", level="error")
        _text.output("==> %s" % (error_file,), level="error")
        _text.output("", level="error")
def _inflate(self, entropy_client):
    """
    Solo Pkg Inflate command.
    """
    files = self._nsargs.files
    savedir = self._nsargs.savedir

    if not os.path.isdir(savedir) and not os.path.exists(savedir):
        # this is validated by the parser
        # but not in case of no --savedir provided
        const_setup_directory(savedir)
    if not os.path.exists(savedir):
        entropy_client.output(
            "%s: %s" % (
                brown(_("broken directory path")),
                savedir,),
            level="error", importance=1)
        return 1

    spm = entropy_client.Spm()
    for _file in files:
        entropy_client.output(
            "%s: %s" % (
                teal(_("working on package file")),
                purple(_file)),
            header=darkred(" @@ "),
            back=True)

        file_name = os.path.basename(_file)
        package_path = os.path.join(savedir, file_name)
        if os.path.realpath(_file) != os.path.realpath(package_path):
            # make a copy first
            shutil.copy2(_file, package_path)

        pkg_data = spm.extract_package_metadata(package_path)
        entropy_client.output(
            "%s: %s" % (
                teal(_("package file extraction complete")),
                purple(package_path)),
            header=darkred(" @@ "),
            back=True)

        # append development revision number
        # and create final package file name
        sha1 = None
        signatures = pkg_data.get('signatures')
        if signatures is not None:
            sha1 = signatures['sha1']
        pkg_data['revision'] = etpConst['spmetprev']

        download_dirpath = entropy.tools.create_package_dirpath(
            pkg_data['branch'], nonfree=False, restricted=False)
        download_name = entropy.dep.create_package_relative_path(
            pkg_data['category'],
            pkg_data['name'],
            pkg_data['version'],
            pkg_data['versiontag'],
            ext=etpConst['packagesext'],
            revision=pkg_data['revision'],
            sha1=sha1)
        pkg_data['download'] = download_dirpath + os.path.sep + \
            download_name

        # migrate to the proper format
        final_path = os.path.join(savedir, download_name)
        if package_path != final_path:
            try:
                os.makedirs(os.path.dirname(final_path))
            except OSError as err:
                # the target directory may already exist; makedirs()
                # signals that with EEXIST, not EISDIR
                if err.errno not in (errno.EEXIST, errno.EISDIR):
                    raise
            shutil.move(package_path, final_path)
            package_path = final_path

        tmp_fd, tmp_path = const_mkstemp(
            prefix="equo.smart.inflate.", dir=savedir)
        os.close(tmp_fd)

        # attach entropy metadata to package file
        repo = entropy_client.open_generic_repository(tmp_path)
        repo.initializeRepository()
        _package_id = repo.addPackage(
            pkg_data, revision=pkg_data['revision'])
        repo.commit()
        repo.close()

        entropy_client.output(
            "%s: %s" % (
                teal(_("package metadata generation complete")),
                purple(package_path)),
            header=darkred(" @@ "),
            back=True)

        entropy.tools.aggregate_entropy_metadata(
            package_path, tmp_path)
        os.remove(tmp_path)

        entropy_client.output(
            "%s: %s" % (
                teal(_("package file generated at")),
                purple(package_path)),
            header=darkred(" @@ "))

    return 0
def _pkgmove(self, entropy_server):
    """
    Open $EDITOR and let user add/remove package moves.
    """
    notice_text = """\
# This is the package move metadata for repository %s, read this:
# - the statements must start with either "move" or "slotmove".
# - move statement syntax:
#      <unix time> move <from package key> <to package key>
# - slotmove statement syntax:
#      <unix time> slotmove <package dependency> <from slot> <to slot>
# - the order of the statements is given by the unix time (ASC).
# - lines not starting with "<unix time> move" or
#   "<unix time> slotmove" will be ignored.
# - any line starting with "#" will be ignored as well.
#
# Example:
# 1319039371.22 move app-foo/bar app-bar/foo
# 1319039323.10 slotmove >=x11-libs/foo-2.0 0 2.0
""" % (self._repository_id,)
    tmp_path = None
    branch = self._settings()['repositories']['branch']

    repo = entropy_server.open_server_repository(
        self._repository_id, read_only=False)
    treeupdates = [(unix_time, t_action) for \
        idupd, t_repo, t_action, t_branch, unix_time in \
        repo.listAllTreeUpdatesActions() \
        if t_repo == self._repository_id \
        and t_branch == branch]
    key_sorter = lambda x: x[0]
    treeupdates.sort(key=key_sorter)

    new_actions = []
    while True:

        if tmp_path is None:
            tmp_fd, tmp_path = const_mkstemp(
                prefix = 'entropy.server.pkgmove',
                suffix = ".conf")
            with os.fdopen(tmp_fd, "w") as tmp_f:
                tmp_f.write(notice_text)
                for unix_time, action in treeupdates:
                    tmp_f.write("%s %s\n" % (unix_time, action))
                tmp_f.flush()

        success = entropy_server.edit_file(tmp_path)
        if not success:
            # retry ?
            os.remove(tmp_path)
            return 1

        del new_actions[:]
        invalid_lines = []
        with codecs.open(tmp_path, "r", encoding="utf-8") as tmp_f:
            for line in tmp_f.readlines():
                if line.startswith("#"):
                    # skip
                    continue
                strip_line = line.strip()
                if not strip_line:
                    # ignore
                    continue

                split_line = strip_line.split()
                try:
                    unix_time = split_line.pop(0)
                    unix_time = str(float(unix_time))
                except ValueError:
                    # invalid unix time
                    entropy_server.output(
                        "%s: %s !!!" % (
                            purple(_("invalid line (time field)")),
                            strip_line),
                        importance=1, level="warning")
                    invalid_lines.append(strip_line)
                    continue
                except IndexError:
                    entropy_server.output(
                        "%s: %s !!!" % (
                            purple(_("invalid line (empty)")),
                            strip_line),
                        importance=1, level="warning")
                    invalid_lines.append(strip_line)
                    continue

                if not split_line:
                    # nothing left??
                    entropy_server.output(
                        "%s: %s !!!" % (
                            purple(_("invalid line (incomplete)")),
                            strip_line),
                        importance=1, level="warning")
                    invalid_lines.append(strip_line)
                    continue

                cmd = split_line.pop(0)
                if cmd == "move":
                    if len(split_line) != 2:
                        entropy_server.output(
                            "%s: %s !!!" % (
                                _("invalid line"), strip_line),
                            importance=1, level="warning")
                        invalid_lines.append(strip_line)
                    elif split_line[0] == split_line[1]:
                        entropy_server.output(
                            "%s: %s !!!" % (
                                _("invalid line (copy)"), strip_line),
                            importance=1, level="warning")
                        invalid_lines.append(strip_line)
                    else:
                        new_action = " ".join(["move"] + split_line)
                        new_actions.append((unix_time, new_action))
                elif cmd == "slotmove":
                    if len(split_line) != 3:
                        entropy_server.output(
                            "%s: %s !!!" % (
                                purple(_("invalid line")),
                                strip_line),
                            importance=1, level="warning")
                        invalid_lines.append(strip_line)
                    elif split_line[1] == split_line[2]:
                        entropy_server.output(
                            "%s: %s !!!" % (
                                purple(_("invalid line (copy)")),
                                strip_line),
                            importance=1, level="warning")
                        invalid_lines.append(strip_line)
                    else:
                        new_action = " ".join(["slotmove"] + split_line)
                        new_actions.append((unix_time, new_action))
                else:
                    entropy_server.output(
                        "%s: %s !!!" % (
                            purple(_("invalid line")),
                            strip_line),
                        importance=1, level="warning")
                    invalid_lines.append(strip_line)

        if invalid_lines:
            resp = entropy_server.ask_question(
                _("Invalid syntax, what to do ?"),
                responses=(_("Repeat"), _("Abort")))
            if resp == _("Abort"):
                os.remove(tmp_path)
                return 1
            else:
                # repeat, edit same file
                continue

        # show submitted info
        new_actions.sort(key=key_sorter)
        for unix_time, action in new_actions:
            entropy_server.output(
                "%s %s" % (unix_time, action),
                level="generic")
        entropy_server.output("", level="generic")

        # ask confirmation
        while True:
            try:
                rc_question = entropy_server.ask_question(
                    "[%s] %s" % (
                        purple(self._repository_id),
                        teal(_("Do you agree?"))
                    ),
                    responses = (_("Yes"), _("Repeat"), _("No"),)
                )
            except KeyboardInterrupt:
                # do not allow, we're in a critical region
                continue
            break
        if rc_question == _("Yes"):
            break
        elif rc_question == _("No"):
            return 1
        # otherwise repeat everything again
        # keep tmp_path

    if tmp_path is not None:
        try:
            os.remove(tmp_path)
        except OSError as err:
            if err.errno != errno.ENOENT:
                raise

    # write new actions
    actions_meta = []
    # time is completely fake, no particular precision required
    for unix_time, action in new_actions:
        # make sure unix_time has final .XX
        if "." not in unix_time:
            unix_time += ".00"
        elif unix_time.index(".") == (len(unix_time) - 2):
            # only .X and not .XX
            unix_time += "0"
        actions_meta.append((action, branch, unix_time))

    repo.removeTreeUpdatesActions(self._repository_id)
    try:
        repo.insertTreeUpdatesActions(actions_meta,
            self._repository_id)
    except Exception as err:
        repo.rollback()
        raise
    repo.commit()
    return 0
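The statement syntax accepted by the editor buffer in _pkgmove() can be sketched as a standalone parser. `parse_move_line` is a hypothetical helper written for illustration, not part of the server code:

```python
def parse_move_line(line):
    # Accepts "<unix time> move <from key> <to key>" or
    # "<unix time> slotmove <dependency> <from slot> <to slot>";
    # returns None for comments, blanks and malformed statements,
    # mirroring the validation loop in _pkgmove().
    strip_line = line.strip()
    if not strip_line or strip_line.startswith("#"):
        return None
    parts = strip_line.split()
    try:
        unix_time = str(float(parts.pop(0)))
    except ValueError:
        return None
    if not parts:
        return None
    cmd = parts.pop(0)
    if cmd == "move" and len(parts) == 2 and parts[0] != parts[1]:
        return (unix_time, " ".join(["move"] + parts))
    if cmd == "slotmove" and len(parts) == 3 and parts[1] != parts[2]:
        return (unix_time, " ".join(["slotmove"] + parts))
    return None
```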
def _inflate(self, entropy_client):
    """
    Solo Pkg Inflate command.
    """
    files = self._nsargs.files
    savedir = self._nsargs.savedir
    if not os.path.isdir(savedir) and not os.path.exists(savedir):
        # this is validated by the parser
        # but not in case of no --savedir provided
        const_setup_directory(savedir)

    if not os.path.exists(savedir):
        entropy_client.output(
            "%s: %s" % (
                brown(_("broken directory path")),
                savedir,),
            level="error", importance=1)
        return 1

    spm = entropy_client.Spm()

    for _file in files:
        entropy_client.output(
            "%s: %s" % (
                teal(_("working on package file")),
                purple(_file)),
            header=darkred(" @@ "),
            back=True)

        file_name = os.path.basename(_file)
        package_path = os.path.join(savedir, file_name)
        if os.path.realpath(_file) != os.path.realpath(package_path):
            # make a copy first
            shutil.copy2(_file, package_path)

        pkg_data = spm.extract_package_metadata(package_path)
        entropy_client.output(
            "%s: %s" % (
                teal(_("package file extraction complete")),
                purple(package_path)),
            header=darkred(" @@ "),
            back=True)

        # append development revision number
        # and create final package file name
        sha1 = None
        signatures = pkg_data.get('signatures')
        if signatures is not None:
            sha1 = signatures['sha1']
        pkg_data['revision'] = etpConst['spmetprev']

        download_dirpath = entropy.tools.create_package_dirpath(
            pkg_data['branch'], nonfree=False, restricted=False)
        download_name = entropy.dep.create_package_relative_path(
            pkg_data['category'],
            pkg_data['name'],
            pkg_data['version'],
            pkg_data['versiontag'],
            ext=etpConst['packagesext'],
            revision=pkg_data['revision'],
            sha1=sha1)
        pkg_data['download'] = download_dirpath + os.path.sep + \
            download_name

        # migrate to the proper format
        final_path = os.path.join(savedir, download_name)
        if package_path != final_path:
            try:
                os.makedirs(os.path.dirname(final_path))
            except OSError as err:
                if err.errno not in (errno.EISDIR, errno.EEXIST):
                    raise
            shutil.move(package_path, final_path)
            package_path = final_path

        tmp_fd, tmp_path = const_mkstemp(
            prefix="equo.smart.inflate.", dir=savedir)
        os.close(tmp_fd)

        # attach entropy metadata to package file
        repo = entropy_client.open_generic_repository(tmp_path)
        repo.initializeRepository()
        _package_id = repo.addPackage(
            pkg_data, revision=pkg_data['revision'])
        repo.commit()
        repo.close()

        entropy_client.output(
            "%s: %s" % (
                teal(_("package metadata generation complete")),
                purple(package_path)),
            header=darkred(" @@ "),
            back=True)

        entropy.tools.aggregate_entropy_metadata(
            package_path, tmp_path)
        os.remove(tmp_path)

        entropy_client.output(
            "%s: %s" % (
                teal(_("package file generated at")),
                purple(package_path)),
            header=darkred(" @@ "))

    return 0
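A minimal sketch of the "move the package into its final branch directory" step above, using only the standard library. The helper name `move_into_place` is illustrative, not part of the Entropy API; the real code also tolerates `errno.EISDIR`.

```python
import errno
import os
import shutil

def move_into_place(src_path, final_path):
    # create intermediate directories, tolerating a concurrent creator
    try:
        os.makedirs(os.path.dirname(final_path))
    except OSError as err:
        if err.errno != errno.EEXIST:
            raise
    # move the package file to its final, canonical location
    shutil.move(src_path, final_path)
    return final_path
```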
def dumpobj(name, my_object, complete_path=False, ignore_exceptions=True,
            dump_dir=None, custom_permissions=None):
    """
    Dump a picklable object to file.

    @param name: name of the object
    @type name: string
    @param my_object: object to dump
    @type my_object: any Python "picklable" object
    @keyword complete_path: consider "name" argument as
        a complete path (this overrides the default dump
        path given by etpConst['dumpstoragedir'])
    @type complete_path: bool
    @keyword ignore_exceptions: ignore any possible exception
        (EOFError, IOError, OSError,)
    @type ignore_exceptions: bool
    @keyword dump_dir: alternative dump directory
    @type dump_dir: string
    @keyword custom_permissions: give custom permission bits
    @type custom_permissions: octal
    @return: None
    @rtype: None
    @raise EOFError: could be caused by pickle.dump, ignored if
        ignore_exceptions is True
    @raise IOError: could be caused by pickle.dump, ignored if
        ignore_exceptions is True
    @raise OSError: could be caused by pickle.dump, ignored if
        ignore_exceptions is True
    """
    if dump_dir is None:
        dump_dir = D_DIR
    if custom_permissions is None:
        custom_permissions = 0o664

    while True:  # trap ctrl+C
        tmp_fd, tmp_dmpfile = None, None
        try:
            if complete_path:
                dmpfile = name
                c_dump_dir = os.path.dirname(name)
            else:
                _dmp_path = os.path.join(dump_dir, name)
                dmpfile = _dmp_path + D_EXT
                c_dump_dir = os.path.dirname(_dmp_path)

            my_dump_dir = c_dump_dir
            d_paths = []
            while not os.path.isdir(my_dump_dir):
                d_paths.append(my_dump_dir)
                my_dump_dir = os.path.dirname(my_dump_dir)
            if d_paths:
                d_paths = sorted(d_paths)
                for d_path in d_paths:
                    os.mkdir(d_path)
                    const_setup_file(d_path, E_GID, 0o775)

            dmp_name = os.path.basename(dmpfile)
            tmp_fd, tmp_dmpfile = const_mkstemp(
                dir=c_dump_dir, prefix=dmp_name)
            # WARNING: it has been observed that using
            # os.fdopen() below in multi-threaded scenarios
            # is causing EBADF. There is probably a race
            # condition down in the stack.
            with open(tmp_dmpfile, "wb") as dmp_f:
                if const_is_python3():
                    pickle.dump(my_object, dmp_f,
                                protocol=COMPAT_PICKLE_PROTOCOL,
                                fix_imports=True)
                else:
                    pickle.dump(my_object, dmp_f)

            const_setup_file(tmp_dmpfile, E_GID, custom_permissions)
            os.rename(tmp_dmpfile, dmpfile)

        except RuntimeError:
            try:
                os.remove(dmpfile)
            except OSError:
                pass
        except (EOFError, IOError, OSError):
            if not ignore_exceptions:
                raise
        finally:
            if tmp_fd is not None:
                try:
                    os.close(tmp_fd)
                except (IOError, OSError):
                    pass
            if tmp_dmpfile is not None:
                try:
                    os.remove(tmp_dmpfile)
                except (IOError, OSError):
                    pass
        break
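The core of dumpobj() is the write-to-temp-then-rename pattern: readers never observe a partially written dump because rename() within one filesystem is atomic on POSIX. A minimal standalone sketch, using stdlib `tempfile.mkstemp` in place of `const_mkstemp`; the names `atomic_dump`/`atomic_load` are illustrative, not Entropy API.

```python
import os
import pickle
import tempfile

def atomic_dump(obj, dest_path):
    # write the pickle into a temp file in the destination directory
    dest_dir = os.path.dirname(dest_path) or "."
    tmp_fd, tmp_path = tempfile.mkstemp(dir=dest_dir, prefix=".dump.")
    try:
        with os.fdopen(tmp_fd, "wb") as tmp_f:
            pickle.dump(obj, tmp_f)
        # atomic within the same filesystem: readers see old or new, never half
        os.rename(tmp_path, dest_path)
    except BaseException:
        try:
            os.remove(tmp_path)
        except OSError:
            pass
        raise

def atomic_load(path):
    with open(path, "rb") as f:
        return pickle.load(f)
```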
def _try_edelta_multifetch_internal(self, url_path_list,
                                    url_data_map, resume):
    """
    _try_edelta_multifetch(), assuming that the relevant
    file locks are held.
    """

    @contextlib.contextmanager
    def download_context(path):
        lock = None
        try:
            lock = self.path_lock(path)
            with lock.exclusive():
                yield
        finally:
            if lock is not None:
                lock.close()

    def pre_download_hook(path, _download_id):
        # assume that, if path is available, it's been
        # downloaded already. checksum verification will
        # happen afterwards.
        if self._stat_path(path):
            # this is not None and not an established
            # UrlFetcher return code
            return self

    fetch_abort_function = self._meta.get('fetch_abort_function')
    fetch_intf = self._entropy._multiple_url_fetcher(
        url_path_list, resume=resume,
        abort_check_func=fetch_abort_function,
        url_fetcher_class=self._entropy._url_fetcher,
        download_context_func=download_context,
        pre_download_hook=pre_download_hook)
    try:
        # make sure that we don't need to abort already
        # doing the check here avoids timeouts
        if fetch_abort_function is not None:
            fetch_abort_function()
        data = fetch_intf.download()
    except (KeyboardInterrupt, InterruptError):
        return [], 0.0, -100

    data_transfer = fetch_intf.get_transfer_rate()

    fetch_errors = (
        UrlFetcher.TIMEOUT_FETCH_ERROR,
        UrlFetcher.GENERIC_FETCH_ERROR,
        UrlFetcher.GENERIC_FETCH_WARN,
    )
    valid_idxs = []

    for url_data_map_idx, cksum in tuple(data.items()):

        if cksum in fetch_errors:
            # download failed
            continue

        (pkg_id, repository_id, url, dest_path,
         orig_cksum, _signs, _edelta_url,
         edelta_download_path,
         installed_download_path) = url_data_map[url_data_map_idx]

        dest_path_dir = os.path.dirname(dest_path)
        lock = None
        try:
            lock = self.path_lock(edelta_download_path)
            with lock.shared():

                tmp_fd, tmp_path = None, None
                try:
                    tmp_fd, tmp_path = const_mkstemp(
                        dir=dest_path_dir,
                        suffix=".edelta_pkg_tmp")
                    try:
                        entropy.tools.apply_entropy_delta(
                            installed_download_path,  # best effort read
                            edelta_download_path,  # shared lock
                            tmp_path)  # atomically created path
                    except IOError:
                        continue

                    os.rename(tmp_path, dest_path)
                    valid_idxs.append(url_data_map_idx)
                finally:
                    if tmp_fd is not None:
                        try:
                            os.close(tmp_fd)
                        except OSError:
                            pass
                    if tmp_path is not None:
                        try:
                            os.remove(tmp_path)
                        except OSError:
                            pass
        finally:
            if lock is not None:
                lock.close()

    fetched_url_data = []
    for url_data_map_idx in valid_idxs:
        (pkg_id, repository_id, url, dest_path,
         orig_cksum, signs, _edelta_url,
         edelta_download_path,
         installed_download_path) = url_data_map[url_data_map_idx]

        try:
            valid = entropy.tools.compare_md5(dest_path, orig_cksum)
        except (IOError, OSError):
            valid = False

        if valid:
            url_data_item = (
                pkg_id, repository_id, url,
                dest_path, orig_cksum, signs
            )
            fetched_url_data.append(url_data_item)

    return fetched_url_data, data_transfer, 0
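The delta-application step above follows the same atomic pattern: reconstruct the package into a temp file in the destination directory, then rename() into place. A standalone sketch with a caller-supplied apply function standing in for `entropy.tools.apply_entropy_delta`; the helper name `apply_delta_atomically` is hypothetical.

```python
import os
import tempfile

def apply_delta_atomically(base_path, delta_path, dest_path, apply_func):
    # temp file lives next to dest_path so the final rename stays
    # on the same filesystem (and is therefore atomic on POSIX)
    dest_dir = os.path.dirname(dest_path) or "."
    tmp_fd, tmp_path = tempfile.mkstemp(dir=dest_dir, suffix=".edelta_tmp")
    os.close(tmp_fd)
    try:
        # apply_func(base, delta, output) writes the reconstructed file
        apply_func(base_path, delta_path, tmp_path)
        os.rename(tmp_path, dest_path)
        return True
    except IOError:
        try:
            os.remove(tmp_path)
        except OSError:
            pass
        return False
```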
def _add_notice_editor(self, entropy_server, repository_id):
    """
    Open $EDITOR and let the user write the notice-board entry.
    """
    notice_text = """
# Please enter the notice-board text above as follows:
# - the first line is considered the title
# - the following lines are considered the actual message
#   body of the notice-board entry
# - the last line, if starting with URL: will be parsed as the
#   actual notice-board entry URL
# - any line starting with "#" will be removed
#
# Example:
# [message title]
# [message
#  body]
# URL: http://<url>
"""
    notice_title = None
    notice_body = ""
    notice_url = None

    tmp_path = None
    while True:

        if tmp_path is None:
            tmp_fd, tmp_path = const_mkstemp(
                prefix='entropy.server', suffix=".conf")
            with os.fdopen(tmp_fd, "w") as tmp_f:
                tmp_f.write(notice_text)
                tmp_f.flush()

        success = entropy_server.edit_file(tmp_path)
        if not success:
            # retry ?
            os.remove(tmp_path)
            tmp_path = None
            continue

        notice_title = None
        notice_body = ""
        notice_url = None
        all_good = False

        with codecs.open(tmp_path, "r", encoding="utf-8") as tmp_f:
            line = None
            last_line = None
            for line in tmp_f.readlines():
                if line.startswith("#"):
                    # skip
                    continue
                strip_line = line.strip()
                if not strip_line:
                    # ignore
                    continue
                if notice_title is None:
                    notice_title = strip_line
                    continue
                last_line = line
                notice_body += line

        if last_line is not None:
            if last_line.startswith("URL:"):
                url = last_line.strip()[4:].strip()
                if url:
                    notice_url = url
                    # drop last line then
                    notice_body = notice_body[:-len(last_line)]

        if notice_title and notice_body:
            all_good = True

        if not all_good:
            os.remove(tmp_path)
            tmp_path = None
            continue

        # show submitted info
        entropy_server.output(
            "%s: %s" % (darkgreen(_("Title")), purple(notice_title)),
            level="generic")
        entropy_server.output("", level="generic")
        entropy_server.output(notice_body, level="generic")

        url = notice_url
        if url is None:
            url = _("no URL")
        entropy_server.output(
            "%s: %s" % (teal(_("URL")), brown(url)),
            level="generic")
        entropy_server.output("", level="generic")

        # ask confirmation
        while True:
            try:
                rc_question = entropy_server.ask_question(
                    "[%s] %s" % (
                        purple(repository_id),
                        teal(_("Do you agree?"))
                    ),
                    responses=(_("Yes"), _("Repeat"), _("No"),))
            except KeyboardInterrupt:
                # do not allow, we're in a critical region
                continue
            break
        if rc_question == _("Yes"):
            break
        elif rc_question == _("No"):
            return None, None, None
        # otherwise repeat everything again
        # keep tmp_path

    if tmp_path is not None:
        try:
            os.remove(tmp_path)
        except OSError as err:
            if err.errno != errno.ENOENT:
                raise

    return notice_title, notice_body, notice_url
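The parsing loop above can be isolated into a pure function, which makes the title/body/URL contract easy to check. The function name `parse_notice` is illustrative; the logic mirrors the loop in _add_notice_editor().

```python
def parse_notice(text):
    # first non-comment, non-blank line -> title; the rest -> body;
    # a trailing "URL:" line -> entry URL (dropped from the body)
    title, body, url = None, "", None
    last_line = None
    for line in text.splitlines(True):
        if line.startswith("#"):
            continue
        strip_line = line.strip()
        if not strip_line:
            continue
        if title is None:
            title = strip_line
            continue
        last_line = line
        body += line
    if last_line is not None and last_line.startswith("URL:"):
        u = last_line.strip()[4:].strip()
        if u:
            url = u
            body = body[:-len(last_line)]
    return title, body, url
```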
def _pkgmove(self, entropy_server):
    """
    Open $EDITOR and let the user add/remove package moves.
    """
    notice_text = """\
# This is the package move metadata for repository %s, read this:
# - the statements must start with either "move" or "slotmove".
# - move statement syntax:
#      <unix time> move <from package key> <to package key>
# - slotmove statement syntax:
#      <unix time> slotmove <package dependency> <from slot> <to slot>
# - the order of the statements is given by the unix time (ASC).
# - lines not starting with "<unix time> move" or
#   "<unix time> slotmove" will be ignored.
# - any line starting with "#" will be ignored as well.
#
# Example:
# 1319039371.22 move app-foo/bar app-bar/foo
# 1319039323.10 slotmove >=x11-libs/foo-2.0 0 2.0
""" % (self._repository_id,)
    tmp_path = None
    branch = self._settings()['repositories']['branch']
    repo = entropy_server.open_server_repository(
        self._repository_id, read_only=False)

    treeupdates = [
        (unix_time, t_action) for
        idupd, t_repo, t_action, t_branch, unix_time in
        repo.listAllTreeUpdatesActions()
        if t_repo == self._repository_id and t_branch == branch]

    key_sorter = lambda x: x[0]
    treeupdates.sort(key=key_sorter)

    new_actions = []
    while True:

        if tmp_path is None:
            tmp_fd, tmp_path = const_mkstemp(
                prefix='entropy.server.pkgmove', suffix=".conf")
            with os.fdopen(tmp_fd, "w") as tmp_f:
                tmp_f.write(notice_text)
                for unix_time, action in treeupdates:
                    tmp_f.write("%s %s\n" % (unix_time, action))
                tmp_f.flush()

        success = entropy_server.edit_file(tmp_path)
        if not success:
            # retry ?
            os.remove(tmp_path)
            return 1

        del new_actions[:]
        invalid_lines = []
        with codecs.open(tmp_path, "r", encoding="utf-8") as tmp_f:
            for line in tmp_f.readlines():
                if line.startswith("#"):
                    # skip
                    continue
                strip_line = line.strip()
                if not strip_line:
                    # ignore
                    continue

                split_line = strip_line.split()
                try:
                    unix_time = split_line.pop(0)
                    unix_time = str(float(unix_time))
                except ValueError:
                    # invalid unix time
                    entropy_server.output(
                        "%s: %s !!!" % (
                            purple(_("invalid line (time field)")),
                            strip_line),
                        importance=1, level="warning")
                    invalid_lines.append(strip_line)
                    continue
                except IndexError:
                    entropy_server.output(
                        "%s: %s !!!" % (
                            purple(_("invalid line (empty)")),
                            strip_line),
                        importance=1, level="warning")
                    invalid_lines.append(strip_line)
                    continue

                if not split_line:
                    # nothing left??
                    entropy_server.output(
                        "%s: %s !!!" % (
                            purple(_("invalid line (incomplete)")),
                            strip_line),
                        importance=1, level="warning")
                    invalid_lines.append(strip_line)
                    continue

                cmd = split_line.pop(0)
                if cmd == "move":
                    if len(split_line) != 2:
                        entropy_server.output(
                            "%s: %s !!!" % (
                                _("invalid line"), strip_line),
                            importance=1, level="warning")
                        invalid_lines.append(strip_line)
                    elif split_line[0] == split_line[1]:
                        entropy_server.output(
                            "%s: %s !!!" % (
                                _("invalid line (copy)"), strip_line),
                            importance=1, level="warning")
                        invalid_lines.append(strip_line)
                    else:
                        new_action = " ".join(["move"] + split_line)
                        new_actions.append((unix_time, new_action))
                elif cmd == "slotmove":
                    if len(split_line) != 3:
                        entropy_server.output(
                            "%s: %s !!!" % (
                                purple(_("invalid line")),
                                strip_line),
                            importance=1, level="warning")
                        invalid_lines.append(strip_line)
                    elif split_line[1] == split_line[2]:
                        entropy_server.output(
                            "%s: %s !!!" % (
                                purple(_("invalid line (copy)")),
                                strip_line),
                            importance=1, level="warning")
                        invalid_lines.append(strip_line)
                    else:
                        new_action = " ".join(["slotmove"] + split_line)
                        new_actions.append((unix_time, new_action))
                else:
                    entropy_server.output(
                        "%s: %s !!!" % (
                            purple(_("invalid line")),
                            strip_line),
                        importance=1, level="warning")
                    invalid_lines.append(strip_line)

        if invalid_lines:
            resp = entropy_server.ask_question(
                _("Invalid syntax, what to do?"),
                responses=(_("Repeat"), _("Abort")))
            if resp == _("Abort"):
                os.remove(tmp_path)
                return 1
            else:
                # repeat, edit same file
                continue

        # show submitted info
        new_actions.sort(key=key_sorter)
        for unix_time, action in new_actions:
            entropy_server.output(
                "%s %s" % (unix_time, action),
                level="generic")
        entropy_server.output("", level="generic")

        # ask confirmation
        while True:
            try:
                rc_question = entropy_server.ask_question(
                    "[%s] %s" % (
                        purple(self._repository_id),
                        teal(_("Do you agree?"))
                    ),
                    responses=(_("Yes"), _("Repeat"), _("No"),))
            except KeyboardInterrupt:
                # do not allow, we're in a critical region
                continue
            break
        if rc_question == _("Yes"):
            break
        elif rc_question == _("No"):
            return 1
        # otherwise repeat everything again
        # keep tmp_path

    if tmp_path is not None:
        try:
            os.remove(tmp_path)
        except OSError as err:
            if err.errno != errno.ENOENT:
                raise

    # write new actions
    actions_meta = []
    # time is completely fake, no particular precision required
    for unix_time, action in new_actions:
        # make sure unix_time has final .XX
        if "." not in unix_time:
            unix_time += ".00"
        elif unix_time.index(".") == (len(unix_time) - 2):
            # only .X and not .XX
            unix_time += "0"
        actions_meta.append((action, branch, unix_time))

    repo.removeTreeUpdatesActions(self._repository_id)
    try:
        repo.insertTreeUpdatesActions(actions_meta, self._repository_id)
    except Exception:
        repo.rollback()
        raise
    repo.commit()
    return 0
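The timestamp normalization at the end of _pkgmove() is worth isolating: every unix-time string must carry exactly two decimal digits so the actions sort consistently. The function name `normalize_unix_time` is illustrative; the body mirrors the loop above.

```python
def normalize_unix_time(unix_time):
    # ensure the string ends with exactly two decimal digits
    if "." not in unix_time:
        unix_time += ".00"
    elif unix_time.index(".") == (len(unix_time) - 2):
        # only .X and not .XX, pad with a zero
        unix_time += "0"
    return unix_time
```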
def _spmsync(self, entropy_client, inst_repo):
    """
    Solo Smart Spmsync command.
    """
    ask = self._nsargs.ask
    pretend = self._nsargs.pretend

    spm = entropy_client.Spm()

    entropy_client.output(
        "%s..." % (
            teal(_("Scanning Source Package Manager repository")),),
        header=brown(" @@ "),
        back=True)

    spm_packages = spm.get_installed_packages()
    installed_packages = []
    for spm_package in spm_packages:
        try:
            spm_package_id = spm.resolve_spm_package_uid(spm_package)
        except KeyError as err:
            entropy_client.output(
                "%s: %s, %s" % (
                    darkred(_("Cannot find package")),
                    purple(spm_package), err,),
                level="warning", importance=1)
            continue
        installed_packages.append((spm_package, spm_package_id,))

    entropy_client.output(
        "%s..." % (
            teal(_("Scanning Entropy repository")),),
        header=brown(" @@ "),
        back=True)

    installed_spm_uids = set()
    to_be_added = set()
    to_be_removed = set()

    # collect new packages
    for spm_package, spm_package_id in installed_packages:
        installed_spm_uids.add(spm_package_id)
        if not inst_repo.isSpmUidAvailable(spm_package_id):
            to_be_added.add((spm_package, spm_package_id))

    # do some memoization to speed up the scanning
    _spm_key_slot_map = {}
    for _spm_pkg, _spm_pkg_id in to_be_added:
        key = entropy.dep.dep_getkey(_spm_pkg)
        obj = _spm_key_slot_map.setdefault(key, set())
        try:
            slot = spm.get_installed_package_metadata(_spm_pkg, "SLOT")
            # workaround for ebuilds without SLOT
            if slot is None:
                slot = '0'
            obj.add(slot)
        except KeyError:
            continue

    # packages to be removed from the database
    repo_spm_uids = inst_repo.listAllSpmUids()
    for spm_package_id, package_id in repo_spm_uids:
        # skip packages without valid counter
        if spm_package_id < 0:
            continue
        if spm_package_id in installed_spm_uids:
            # legit, package is still there, skip
            continue
        if not to_be_added:
            # there is nothing to check in to_be_added
            to_be_removed.add(package_id)
            continue

        atom = inst_repo.retrieveAtom(package_id)
        add = True
        if atom:
            atomkey = entropy.dep.dep_getkey(atom)
            atomslot = inst_repo.retrieveSlot(package_id)
            spm_slots = _spm_key_slot_map.get(atomkey)
            if spm_slots is not None:
                if atomslot in spm_slots:
                    # do not add to to_be_removed
                    add = False
        if add:
            to_be_removed.add(package_id)

    if not to_be_removed and not to_be_added:
        entropy_client.output(
            darkgreen(_("Nothing to do")),
            importance=1)
        return 0

    if to_be_removed:
        entropy_client.output(
            "%s:" % (
                purple(_("These packages were removed")),),
            importance=1,
            header=brown(" @@ "))

        broken = set()
        for package_id in to_be_removed:
            atom = inst_repo.retrieveAtom(package_id)
            if not atom:
                broken.add(package_id)
                continue
            entropy_client.output(
                darkred(atom),
                header=brown(" # "))
        to_be_removed -= broken

    if to_be_removed and not pretend:
        rc = _("Yes")
        accepted = True
        if ask:
            rc = entropy_client.ask_question(_("Continue ?"))
            if rc != _("Yes"):
                accepted = False

        if accepted:
            counter = 0
            total = len(to_be_removed)
            for package_id in to_be_removed:
                counter += 1
                atom = inst_repo.retrieveAtom(package_id)
                entropy_client.output(
                    teal(atom),
                    count=(counter, total),
                    header=darkred(" --- "))
                inst_repo.removePackage(package_id)
            inst_repo.commit()
            entropy_client.output(
                darkgreen(_("Removal complete")),
                importance=1, header=brown(" @@ "))

    if to_be_added:
        entropy_client.output(
            "%s:" % (
                purple(_("These packages were added")),),
            importance=1,
            header=brown(" @@ "))
        for _spm_package, _spm_package_id in to_be_added:
            entropy_client.output(
                darkgreen(_spm_package),
                header=brown(" # "))

    if to_be_added and not pretend:
        if ask:
            rc = entropy_client.ask_question(_("Continue ?"))
            if rc != _("Yes"):
                return 1

        total = len(to_be_added)
        counter = 0

        # perf: only create temp file once
        tmp_fd, tmp_path = const_mkstemp(prefix="equo.rescue.spmsync")
        os.close(tmp_fd)

        spm_wanted = _SpmUserPackages(entropy_client, spm)

        for _spm_package, _spm_package_id in to_be_added:
            counter += 1
            entropy_client.output(
                teal(_spm_package),
                count=(counter, total),
                header=darkgreen(" +++ "))

            # make sure the file is empty
            with open(tmp_path, "w") as tmp_f:
                tmp_f.flush()

            appended = spm.append_metadata_to_package(
                _spm_package, tmp_path)
            if not appended:
                entropy_client.output(
                    "%s: %s" % (
                        purple(_("Invalid package")),
                        teal(_spm_package),),
                    importance=1, header=darkred(" @@ "))
                continue

            # now extract info
            try:
                data = spm.extract_package_metadata(tmp_path)
            except Exception as err:
                entropy.tools.print_traceback()
                entropy_client.output(
                    "%s, %s: %s" % (
                        teal(_spm_package),
                        purple(_("Metadata generation error")),
                        err,),
                    level="warning", importance=1,
                    header=darkred(" @@ "))
                continue

            # create atom string
            atom = entropy.dep.create_package_atom_string(
                data['category'], data['name'],
                data['version'], data['versiontag'])

            # look for atom in client database
            package_ids = inst_repo.getPackageIds(atom)
            old_package_ids = sorted(package_ids)
            try:
                _package_id = old_package_ids.pop()
                data['revision'] = inst_repo.retrieveRevision(
                    _package_id)
            except IndexError:
                data['revision'] = etpConst['spmetprev']

            # cleanup stale info
            if "original_repository" in data:
                del data['original_repository']

            new_package_id = inst_repo.handlePackage(
                data, revision=data['revision'])

            if spm_wanted.is_user_selected(_spm_package):
                source = etpConst['install_sources']['user']
            else:
                source = etpConst['install_sources'][
                    'automatic_dependency']

            inst_repo.storeInstalledPackage(
                new_package_id, etpConst['spmdbid'], source)
            inst_repo.commit()

        try:
            os.remove(tmp_path)
        except OSError:
            pass

        entropy_client.output(
            darkgreen(_("Update complete")),
            importance=1, header=brown(" @@ "))

    return 0
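The reconciliation at the heart of _spmsync() is a set comparison between SPM uids and the Entropy installed repository. A simplified model, with the hypothetical name `reconcile`; the real code additionally spares entries whose key/slot still matches a package being added, which this sketch omits.

```python
def reconcile(spm_uids, entropy_uid_map):
    # spm_uids: uids currently installed according to the SPM
    # entropy_uid_map: {spm_uid: entropy_package_id} from the repository
    to_be_added = set(spm_uids) - set(entropy_uid_map)
    to_be_removed = {
        pkg_id for uid, pkg_id in entropy_uid_map.items()
        # negative uids are packages without a valid counter: skip
        if uid >= 0 and uid not in spm_uids
    }
    return to_be_added, to_be_removed
```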
def _generate(self, entropy_client, inst_repo):
    """
    Solo Smart Generate command.
    """
    mytxt = "%s: %s" % (
        brown(_("Attention")),
        darkred(_("the Installed Packages repository "
                  "will be re-generated using the "
                  "Source Package Manager")),
    )
    entropy_client.output(
        mytxt, level="warning", importance=1)

    mytxt = "%s: %s" % (
        brown(_("Attention")),
        darkred(_("I am not joking, this is quite disruptive")),
    )
    entropy_client.output(
        mytxt, level="warning", importance=1)

    rc = entropy_client.ask_question(
        " %s" % (_("Understood ?"),))
    if rc == _("No"):
        return 1
    rc = entropy_client.ask_question(
        " %s" % (_("Really ?"),))
    if rc == _("No"):
        return 1
    rc = entropy_client.ask_question(
        " %s. %s" % (
            _("This is your last chance"),
            _("Ok?"),))
    if rc == _("No"):
        return 1

    # clean caches
    spm = entropy_client.Spm()
    entropy_client.clear_cache()

    # try to get a list of current package ids, if possible
    try:
        package_ids = inst_repo.listAllPackageIds()
    except Exception as err:
        entropy.tools.print_traceback()
        entropy_client.output(
            "%s: %s" % (
                darkred(_("Cannot read metadata")),
                err,),
            level="warning")
        package_ids = []

    # try to collect current installed revisions if possible
    # and do the same for digest
    revisions_match = {}
    digest_match = {}
    for package_id in package_ids:
        try:
            atom = inst_repo.retrieveAtom(package_id)
            revisions_match[atom] = inst_repo.retrieveRevision(
                package_id)
            digest_match[atom] = inst_repo.retrieveDigest(
                package_id)
        except Exception as err:
            entropy.tools.print_traceback()
            entropy_client.output(
                "%s: %s" % (
                    darkred(_("Cannot read metadata")),
                    err,),
                level="warning")

    repo_path = entropy_client.installed_repository_path()
    entropy_client.output(
        darkgreen(_("Creating a backup of the current repository")),
        level="info",
        importance=1,
        header=darkred(" @@ "))
    entropy_client.output(
        repo_path,
        header=" ")

    inst_repo.commit()
    backed_up, msg = self._backup_repository(
        entropy_client, inst_repo, repo_path)
    if not backed_up:
        mytxt = "%s: %s" % (
            darkred(_("Cannot backup the repository")),
            brown("%s" % msg),)
        entropy_client.output(
            mytxt, level="error", importance=1,
            header=darkred(" @@ "))
        return 1

    entropy_client.close_installed_repository()
    # repository will be re-opened automagically
    # at the next access.
    try:
        os.remove(repo_path)
    except OSError as err:
        if err.errno != errno.ENOENT:
            mytxt = "%s: %s" % (
                purple(_("Cannot delete old repository")),
                brown("%s" % err),)
            entropy_client.output(
                mytxt, level="warning", importance=1,
                header=darkred(" @@ "))
            return 1

    entropy_client.output(
        purple(_("Initializing a new repository")),
        importance=1,
        header=darkred(" @@ "))
    entropy_client.output(
        brown(repo_path),
        header=" ")

    # open a repository at the old path; if repo_path is
    # not in place, Entropy will forward us to the in-RAM
    # database (for sqlite), which is not what we want.
    gen_repo = entropy_client.open_generic_repository(
        repo_path,
        dbname=InstalledPackagesRepository.NAME,
        xcache=False,
        skip_checks=True)
    gen_repo.initializeRepository()
    gen_repo.commit()
    gen_repo.close()

    entropy_client.reopen_installed_repository()
    inst_repo = entropy_client.installed_repository()

    # sanity check: make sure we're not accidentally
    # using the in-RAM db
    if not os.path.exists(repo_path):
        entropy_client.output(
            darkred(_("Repository creation failed")),
            level="error",
            importance=1,
            header=darkred(" @@ "))
        return 1

    entropy_client.output(
        purple(_("Repository initialized, generating metadata")),
        importance=1,
        header=darkred(" @@ "))

    spm_packages = spm.get_installed_packages()
    total = len(spm_packages)
    count = 0

    # perf: reuse the temp file across iterations
    tmp_fd, tmp_path = const_mkstemp(prefix="equo.rescue.generate")
    os.close(tmp_fd)

    for spm_package in spm_packages:
        count += 1

        # make sure the file is empty
        with open(tmp_path, "w") as tmp_f:
            tmp_f.flush()

        entropy_client.output(
            teal(spm_package),
            count=(count, total),
            back=True,
            header=brown(" @@ "))

        appended = spm.append_metadata_to_package(
            spm_package, tmp_path)
        if not appended:
            entropy_client.output(
                "%s: %s" % (
                    purple(_("Invalid package")),
                    teal(spm_package),),
                importance=1,
                header=darkred(" @@ "))
            continue

        try:
            data = spm.extract_package_metadata(tmp_path)
        except Exception as err:
            entropy.tools.print_traceback()
            entropy_client.output(
                "%s, %s: %s" % (
                    teal(spm_package),
                    purple(_("Metadata generation error")),
                    err,),
                level="warning",
                importance=1,
                header=darkred(" @@ "))
            continue

        # try to reuse the revision recorded for this atom
        # in the old repository, if any
        data['revision'] = etpConst['spmetprev']

        # create atom string
        atom = entropy.dep.create_package_atom_string(
            data['category'], data['name'],
            data['version'], data['versiontag'])

        # now see if a revision is available
        saved_rev = revisions_match.get(atom)
        if saved_rev is not None:
            data['revision'] = saved_rev

        # set digest to "0" to disable the entropy dependencies
        # calculation check that would force the package to be
        # pulled in if the digest differs from the one on the repo
        saved_digest = digest_match.get(atom, "0")
        data['digest'] = saved_digest

        package_id = inst_repo.addPackage(
            data, revision=data['revision'])
        inst_repo.storeInstalledPackage(
            package_id, etpConst['spmdbid'])

    inst_repo.commit()

    try:
        os.remove(tmp_path)
    except OSError:
        pass

    entropy_client.output(
        purple(_("Indexing metadata, please wait...")),
        header=darkgreen(" @@ "),
        back=True)
    inst_repo.createAllIndexes()
    entropy_client.output(
        purple(_("Repository metadata generation complete")),
        header=darkgreen(" @@ "))
    return 0
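The revision/digest carry-over in _generate() reduces to two dictionary lookups: reuse the old repository's revision for the same atom, and fall back to digest "0" to bypass the re-download check. A sketch with the hypothetical name `carry_over`:

```python
def carry_over(data, atom, revisions_match, digest_match, default_rev=0):
    # revisions_match / digest_match map atom -> value from the old repo
    data['revision'] = revisions_match.get(atom, default_rev)
    # "0" disables the digest-mismatch re-pull check
    data['digest'] = digest_match.get(atom, "0")
    return data
```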