Example 1
def do_staging(self, subcmd, opts, *args):
    """${cmd_name}: Commands to work with staging projects

    ${cmd_option_list}

    "accept" will accept all requests in
        $PROJECT:Staging:<LETTER> into $PROJECT
        For an openSUSE:* project, requests marked ready from adi stagings will also
        be accepted.

    "acheck" will check if it is safe to accept new staging projects
        As $PROJECT is syncing the right package versions between
        /standard, /totest and /snapshot, it is important that the projects
        are clean prior to a checkin round.

    "adi" will list already staged requests, stage new requests, and supersede
        requests where applicable. New adi stagings will be created for new
        packages based on the grouping options used. The default grouping is by
        source project. When adi stagings are ready the requests will be marked
        ready, unstaged, and the adi staging deleted.

    "check" will check if all packages are links without changes

    "check_duplicate_binaries" list binaries provided by multiple packages

    "config" will modify or view staging specific configuration

        Target project OSRT:Config attribute configuration applies to all
        stagings. Both configuration locations follow the .oscrc format
        (space-separated list).

        config
            Print all staging configuration.
        config key
            Print the value of key for stagings.
        config key value...
            Set the value of key for stagings.
        config --clear
            Clear all staging configuration.
        config --clear key
            Clear (unset) a single key from staging configuration.
        config --append key value...
            Append value to existing value or set if no existing value.

        All of the above may be restricted to a set of stagings.

        The staging configuration is automatically cleared anytime staging
        pseudometa is cleared (accept, or unstage all requests).

        The keys that may be set in staging configuration are:

        - repo_checker-binary-whitelist[-arch]: appended to target project list
        - todo: text to be printed after staging is accepted

    "cleanup_rings" will try to cleanup rings content and print
        out problems

    "freeze" will freeze the sources of the project's links while not
        affecting the source packages

    "frozenage" will show when the respective staging project was last frozen

    "ignore" will ignore a request from "list" and "adi" commands until unignored

    "unignore" will remove from requests from ignore list
        If the --cleanup flag is included then all ignored requests that were
        changed from state new or review more than 3 days ago will be removed.

    "list" will list/supersede requests for ring packages or all if no rings.

    "lock" acquire a hold on the project in order to execute multiple commands
        and prevent others from interrupting. An example:

        lock -m "checkin round"

        list --supersede
        adi
        accept A B C D E

        unlock

        Each command will update the lock to keep it up-to-date.

    "repair" will attempt to repair the state of a request that has been
        corrupted.

        Use the --cleanup flag to include all untracked requests.

    "select" will add requests to the project
        Stagings are expected to be either in short-hand or the full project
        name. For example letter or named stagings can be specified simply as
        A, B, Gcc6, etc, while adi stagings can be specified as adi:1, adi:2,
        etc. Currently, adi stagings are not supported in proposal mode.

        Requests may either be the target package or the request ID.

        When using --filter-by or --group-by the XPath will be applied to the
        request node as returned by OBS. Use the following on a current request
        to see the XML structure:

        osc api /request/1337

        A number of additional values will supplement the normal request node.

        - ./action/target/@devel_project: the devel project for the package
        - ./action/target/@devel_project_super: super devel project if relevant
        - ./action/target/@ring: the ring to which the package belongs
        - ./@aged: either True or False based on splitter-request-age-threshold
        - ./@nonfree: set to nonfree if targeting a nonfree subproject
        - ./@ignored: either False or the provided message

        Some useful examples:

        --filter-by './action/target[starts-with(@package, "yast-")]'
        --filter-by './action/target[@devel_project="YaST:Head"]'
        --filter-by './action/target[starts-with(@ring, "1")]'
        --filter-by '@id!="1234567"'
        --filter-by 'contains(description, "#Portus")'

        --group-by='./action/target/@devel_project'
        --group-by='./action/target/@ring'

        Multiple filter-by or group-by options may be used at the same time.

        Note that when using proposal mode, multiple stagings to consider may be
        provided in addition to a list of requests by which to filter. A more
        complex example:

        select --group-by='./action/target/@devel_project' A B C 123 456 789

        This will separate the requests 123, 456, 789 by devel project and only
        consider stagings A, B, or C, if available, for placement.

        Passing no arguments is also valid and will propose all non-ignored
        requests into the first available staging. Note that bootstrapped
        stagings are only used when either required or no other stagings are
        available.

        Another useful example is placing all open requests into a specific
        letter staging with:

        select A

        Built-in strategies may be specified as well. For example:

        select --strategy devel
        select --strategy quick
        select --strategy special
        select --strategy super

        The default is none; custom is used when any filter-by or group-by
        arguments are provided.

        To merge applicable requests into an existing staging:

        select --merge A

        To automatically try all available strategies:

        select --try-strategies

        These concepts can be combined and interactive mode allows the proposal
        to be modified before it is executed.

        Moving requests can be accomplished using the --move flag. For example,
        to move already staged pac1 and pac2 to staging B use the following.

        select --move B pac1 pac2

        The staging in which the requests are staged will automatically be
        determined and the requests will be removed from that staging and placed
        in the specified staging.

        Related to this, the --filter-from option may be used in conjunction
        with --move to only move requests already staged in a specific staging.
        This can be useful if a staging master is responsible for a specific set
        of packages and wants to move them into a different staging when they
        were already placed in a mixed staging. For example, if one had a file
        with a list of packages the following would move any of them found in
        staging A to staging B.

        select --move --filter-from A B $(< package.list)

    "unselect" will remove from the project - pushing them back to the backlog
        If a message is included the requests will be ignored first.

        Use the --cleanup flag to include all obsolete requests.

    "unlock" will remove the staging lock in case it gets stuck or a manual hold
        If a command lock gets stuck while a hold is placed on a project the
        unlock command will need to be run twice since there are two layers of
        locks.

    "rebuild" will rebuild broken packages in the given stagings or all
        The rebuild command will only trigger builds for packages with less than
        3 failures since the last success or if the build log indicates a stall.

        If the force option is included the rebuild checks will be ignored and
        all packages failing to build will be triggered.

    "setprio" will set priority of requests withing stagings
        If no stagings are specified all stagings will be used.
        The default priority is important, but the possible values are:
          "critical", "important", "moderate" or "low".

    "supersede" will supersede requests were applicable.
        A request list can be used to limit what is superseded.

    Usage:
        osc staging accept [--force] [--no-cleanup] [LETTER...]
        osc staging acheck
        osc staging adi [--move] [--by-develproject] [--split] [REQUEST...]
        osc staging check [--old] [STAGING...]
        osc staging check_duplicate_binaries
        osc staging config [--append] [--clear] [STAGING...] [key] [value]
        osc staging cleanup_rings
        osc staging freeze [--no-bootstrap] STAGING...
        osc staging frozenage [STAGING...]
        osc staging ignore [-m MESSAGE] REQUEST...
        osc staging unignore [--cleanup] [REQUEST...|all]
        osc staging list [--supersede]
        osc staging lock [-m MESSAGE]
        osc staging select [--no-freeze] [--move [--filter-from STAGING]]
            [--add PACKAGE]
            STAGING REQUEST...
        osc staging select [--no-freeze] [--interactive|--non-interactive]
            [--filter-by...] [--group-by...]
            [--merge] [--try-strategies] [--strategy]
            [STAGING...] [REQUEST...]
        osc staging unselect [--cleanup] [-m MESSAGE] [REQUEST...]
        osc staging unlock
        osc staging rebuild [--force] [STAGING...]
        osc staging repair [--cleanup] [REQUEST...]
        osc staging setprio [STAGING...] [priority]
        osc staging supersede [REQUEST...]
    """
    if opts.version:
        self._print_version()

    # verify the argument counts match the commands
    if len(args) == 0:
        raise oscerr.WrongArgs('No command given, see "osc help staging"!')
    cmd = args[0]
    if cmd in (
        'accept',
        'adi',
        'check',
        'config',
        'frozenage',
        'unignore',
        'select',
        'unselect',
        'rebuild',
        'repair',
        'setprio',
        'supersede',
    ):
        min_args, max_args = 0, None
    elif cmd in (
        'freeze',
        'ignore',
    ):
        min_args, max_args = 1, None
    elif cmd in (
        'acheck',
        'check_duplicate_binaries',
        'cleanup_rings',
        'list',
        'lock',
        'unlock',
    ):
        min_args, max_args = 0, 0
    else:
        raise oscerr.WrongArgs('Unknown command: %s' % cmd)
    args = clean_args(args)
    if len(args) - 1 < min_args:
        raise oscerr.WrongArgs('Too few arguments.')
    if max_args is not None and len(args) - 1 > max_args:
        raise oscerr.WrongArgs('Too many arguments.')
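    # For example, 'osc staging freeze' with no staging fails the min_args
    # check above, while 'osc staging lock extra' trips the max_args check.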

    # Allow for determining project from osc store.
    if not opts.project:
        if core.is_project_dir('.'):
            opts.project = core.store_read_project('.')
        else:
            opts.project = 'Factory'

    # Cache the remote config fetch.
    Cache.init()

    # Init the OBS access and configuration
    opts.project = self._full_project_name(opts.project)
    opts.apiurl = self.get_api_url()
    opts.verbose = False
    Config(opts.apiurl, opts.project)

    colorama.init(autoreset=True,
        strip=(opts.no_color or not bool(int(conf.config.get('staging.color', True)))))
    # Allow colors to be changed.
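    # For illustration, a hypothetical ~/.oscrc entry such as
    #   staging.color.red = 31;1
    # would be picked up below and override Fore.RED.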
    for name in dir(Fore):
        if not name.startswith('_'):
            # .oscrc requires keys to be lower-case.
            value = conf.config.get('staging.color.' + name.lower())
            if value:
                setattr(Fore, name, ansi.code_to_chars(value))

    if opts.wipe_cache:
        Cache.delete_all()

    api = StagingAPI(opts.apiurl, opts.project)
    needed = lock_needed(cmd, opts)
    with OBSLock(opts.apiurl, opts.project, reason=cmd, needed=needed) as lock:

        # call the respective command and parse args by need
        if cmd == 'check':
            if len(args) == 1:
                CheckCommand(api).perform(None, opts.old)
            else:
                for prj in args[1:]:
                    CheckCommand(api).perform(prj, opts.old)
                    print()
        elif cmd == 'check_duplicate_binaries':
            CheckDuplicateBinariesCommand(api).perform(opts.save)
        elif cmd == 'config':
            projects = set()
            key = value = None
            stagings = api.get_staging_projects_short(None) + \
                       api.get_staging_projects()
            for arg in args[1:]:
                if arg in stagings:
                    projects.add(api.prj_from_short(arg))
                elif key is None:
                    key = arg
                elif value is None:
                    value = arg
                else:
                    value += ' ' + arg

            if not projects:
                projects = api.get_staging_projects()

            ConfigCommand(api).perform(projects, key, value, opts.append, opts.clear)
        elif cmd == 'freeze':
            for prj in args[1:]:
                prj = api.prj_from_short(prj)
                print(Fore.YELLOW + prj)
                FreezeCommand(api).perform(prj, copy_bootstrap=opts.bootstrap)
        elif cmd == 'frozenage':
            projects = api.get_staging_projects_short() if len(args) == 1 else args[1:]
            for prj in projects:
                prj = api.prj_from_letter(prj)
                print('{} last frozen {}{:.1f} days ago'.format(
                    Fore.YELLOW + prj + Fore.RESET,
                    Fore.GREEN if api.prj_frozen_enough(prj) else Fore.RED,
                    api.days_since_last_freeze(prj)))
        elif cmd == 'acheck':
            # Is it safe to accept? Meaning: /totest contains what it should and is not dirty
            version_totest = api.get_binary_version(api.project, "openSUSE-release.rpm", repository="totest", arch="x86_64")
            if version_totest:
                version_openqa = api.pseudometa_file_load('version_totest')
                totest_dirty = api.is_repo_dirty(api.project, 'totest')
                print("version_openqa: %s / version_totest: %s / totest_dirty: %s\n" % (version_openqa, version_totest, totest_dirty))
            else:
                print("acheck is unavailable in %s!\n" % (api.project))
        elif cmd == 'accept':
            # Is it safe to accept? Meaning: /totest contains what it should and is not dirty
            version_totest = api.get_binary_version(api.project, "openSUSE-release.rpm", repository="totest", arch="x86_64")

            if version_totest is None or opts.force:
                # SLE does not have a totest_version or openqa_version - ignore it
                version_openqa = version_totest
                totest_dirty   = False
            else:
                version_openqa = api.pseudometa_file_load('version_totest')
                totest_dirty   = api.is_repo_dirty(api.project, 'totest')

            if version_openqa == version_totest and not totest_dirty:
                accept = AcceptCommand(api)
                for prj in args[1:]:
                    if accept.perform(api.prj_from_letter(prj), opts.force):
                        accept.reset_rebuild_data(prj)
                    else:
                        return
                    if not opts.no_cleanup:
                        if api.item_exists(api.prj_from_letter(prj)):
                            accept.cleanup(api.prj_from_letter(prj))
                accept.accept_other_new()
                if opts.project.startswith('openSUSE:'):
                    accept.update_factory_version()
                    if api.item_exists(api.crebuild):
                        accept.sync_buildfailures()
            else:
                print("Not safe to accept: /totest is not yet synced")
        elif cmd == 'unselect':
            if opts.message:
                print('Ignoring requests first')
                IgnoreCommand(api).perform(args[1:], opts.message)
            UnselectCommand(api).perform(args[1:], opts.cleanup)
        elif cmd == 'select':
            # Include list of all stagings in short-hand and by full name.
            existing_stagings = api.get_staging_projects_short(None)
            existing_stagings += api.get_staging_projects()
            stagings = []
            requests = []
            for arg in args[1:]:
                # Since requests may be given by either request ID or package
                # name, and stagings may include multi-letter special stagings,
                # there is no easy way to distinguish between stagings and
                # requests in arguments. Therefore, check if the argument is in
                # the list of short-hand and full project name stagings,
                # otherwise consider it a request. This also allows for special
                # stagings with the same name as a package, but the staging
                # will be assumed the first time around. The current practice
                # seems to be to start a special staging with a capital letter,
                # which makes them unique. Lastly, adi stagings are
                # consistently prefixed with adi:, which also makes it easy to
                # distinguish them from request IDs.
                if arg in existing_stagings and arg not in stagings:
                    stagings.append(api.extract_staging_short(arg))
                elif arg not in requests:
                    requests.append(arg)

            if len(stagings) != 1 or len(requests) == 0 or opts.filter_by or opts.group_by:
                if opts.move or opts.filter_from:
                    print('--move and --filter-from must be used with explicit staging and request list')
                    return

                open_requests = api.get_open_requests({'withhistory': 1}, include_nonfree=False)
                if len(open_requests) == 0:
                    print('No open requests to consider')
                    return

                splitter = RequestSplitter(api, open_requests, in_ring=True)

                considerable = splitter.stagings_load(stagings)
                if considerable == 0:
                    print('No considerable stagings on which to act')
                    return

                if opts.merge:
                    splitter.merge()
                if opts.try_strategies:
                    splitter.strategies_try()
                if len(requests) > 0:
                    splitter.strategy_do('requests', requests=requests)
                if opts.strategy:
                    splitter.strategy_do(opts.strategy)
                elif opts.filter_by or opts.group_by:
                    kwargs = {}
                    if opts.filter_by:
                        kwargs['filters'] = opts.filter_by
                    if opts.group_by:
                        kwargs['groups'] = opts.group_by
                    splitter.strategy_do('custom', **kwargs)
                else:
                    if opts.merge:
                        # Merge any none strategies before final none strategy.
                        splitter.merge(strategy_none=True)
                    splitter.strategy_do('none')
                    splitter.strategy_do_non_bootstrapped('none')

                proposal = splitter.proposal
                if len(proposal) == 0:
                    print('Empty proposal')
                    return

                if opts.interactive:
                    with tempfile.NamedTemporaryFile(mode='w', suffix='.yml') as temp:
                        temp.write(yaml.safe_dump(splitter.proposal, default_flow_style=False) + '\n\n')

                        if len(splitter.requests):
                            temp.write('# remaining requests:\n')
                            for request in splitter.requests:
                                temp.write('#    {}: {}\n'.format(
                                    request.get('id'), request.find('action/target').get('package')))
                            temp.write('\n')

                        temp.write('# move requests between stagings or comment/remove them\n')
                        temp.write('# change the target staging for a group\n')
                        temp.write('# remove the group, requests, staging, or strategy to skip\n')
                        temp.write('# stagings\n')
                        if opts.merge:
                            temp.write('# - mergeable: {}\n'
                                       .format(', '.join(sorted(splitter.stagings_mergeable +
                                                                splitter.stagings_mergeable_none))))
                        temp.write('# - considered: {}\n'
                                   .format(', '.join(sorted(splitter.stagings_considerable))))
                        temp.write('# - remaining: {}\n'
                                   .format(', '.join(sorted(splitter.stagings_available))))
                        temp.flush()

                        editor = os.getenv('EDITOR')
                        if not editor:
                            editor = 'xdg-open'
                        return_code = subprocess.call(editor.split(' ') + [temp.name])

                        with open(temp.name) as proposal_file:
                            proposal = yaml.safe_load(proposal_file)

                        # Filter invalidated groups from proposal.
                        keys = ['group', 'requests', 'staging', 'strategy']
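                        # sorted() snapshots the items up front, so deleting
                        # keys from proposal inside the loop is safe.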
                        for group, info in sorted(proposal.items()):
                            for key in keys:
                                if not info.get(key):
                                    del proposal[group]
                                    break

                print(yaml.safe_dump(proposal, default_flow_style=False))

                print('Accept proposal? [y/n] (y): ', end='')
                if opts.non_interactive:
                    print('y')
                else:
                    response = input().lower()
                    if response != '' and response != 'y':
                        print('Quit')
                        return

                for group, info in sorted(proposal.items()):
                    print('Staging {} in {}'.format(group, info['staging']))

                    # SelectCommand expects strings.
                    request_ids = list(map(str, info['requests'].keys()))
                    target_project = api.prj_from_short(info['staging'])

                    if 'merge' not in info:
                        # Assume that the original splitter_info is desirable
                        # and that this staging is simply manual followup.
                        api.set_splitter_info_in_prj_pseudometa(target_project, info['group'], info['strategy'])

                    SelectCommand(api, target_project) \
                        .perform(request_ids, no_freeze=opts.no_freeze)
            else:
                target_project = api.prj_from_short(stagings[0])
                if opts.add:
                    api.mark_additional_packages(target_project, [opts.add])
                else:
                    filter_from = api.prj_from_short(opts.filter_from) if opts.filter_from else None
                    SelectCommand(api, target_project) \
                        .perform(requests, opts.move,
                                 filter_from, opts.no_freeze)
        elif cmd == 'cleanup_rings':
            CleanupRings(api).perform()
        elif cmd == 'ignore':
            IgnoreCommand(api).perform(args[1:], opts.message)
        elif cmd == 'unignore':
            UnignoreCommand(api).perform(args[1:], opts.cleanup)
        elif cmd == 'list':
            ListCommand(api).perform(supersede=opts.supersede)
        elif cmd == 'lock':
            lock.hold(opts.message)
        elif cmd == 'adi':
            AdiCommand(api).perform(args[1:], move=opts.move, by_dp=opts.by_develproject, split=opts.split)
        elif cmd == 'rebuild':
            RebuildCommand(api).perform(args[1:], opts.force)
        elif cmd == 'repair':
            RepairCommand(api).perform(args[1:], opts.cleanup)
        elif cmd == 'setprio':
            stagings = []
            priority = None

            priorities = ['critical', 'important', 'moderate', 'low']
            for arg in args[1:]:
                if arg in priorities:
                    priority = arg
                else:
                    stagings.append(arg)

            PrioCommand(api).perform(stagings, priority)
        elif cmd == 'supersede':
            SupersedeCommand(api).perform(args[1:])
        elif cmd == 'unlock':
            lock.release(force=True)
Example 3
class StagingHelper(object):
    def __init__(self, project):
        self.project = project
        self.apiurl = osc.conf.config['apiurl']
        Config(self.apiurl, self.project)
        self.api = StagingAPI(self.apiurl, self.project)

    def get_support_package_list(self, project, repository):
        f = osc.core.get_buildconfig(self.apiurl, project,
                                     repository).splitlines()
        pkg_list = []
        for line in f:
            if re.match('Preinstall', line) or re.match(
                    'VM[Ii]nstall', line) or re.match('Support', line):
                content = line.split(':')
                variables = [x.strip() for x in content[1].split(' ')]
                for var in variables:
                    if var != '' and var not in pkg_list:
                        if var.startswith('!') and var[1:] in pkg_list:
                            pkg_list.remove(var[1:])
                        else:
                            pkg_list.append(var)
        return pkg_list
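    # For illustration, given hypothetical buildconfig lines
    #   Preinstall: aaa_base bash
    #   Support: gcc rpm-build
    #   Preinstall: !bash
    # get_support_package_list() returns ['aaa_base', 'gcc', 'rpm-build'];
    # a later '!name' entry removes a previously collected package.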

    def get_project_binarylist(self, project, repository, arch):
        query = {'view': 'binaryversions'}
        root = ET.parse(
            http_GET(
                makeurl(self.apiurl, ['build', project, repository, arch],
                        query=query))).getroot()
        return root

    def process_project_binarylist(self, project, repository, arch):
        prj_binarylist = self.get_project_binarylist(project, repository, arch)
        files = {}
        for package in prj_binarylist.findall('./binaryversionlist'):
            for binary in package.findall('binary'):
                result = re.match(r'(.*)-([^-]*)-([^-]*)\.([^-\.]+)\.rpm',
                                  binary.attrib['name'])
                if not result:
                    continue
                bname = result.group(1)
                if bname.endswith('-debuginfo') or bname.endswith(
                        '-debuginfo-32bit'):
                    continue
                if bname.endswith('-debugsource'):
                    continue
                if bname.startswith('::import::'):
                    continue
                if result.group(4) == 'src':
                    continue
                files[bname] = package.attrib['package'].split(':', 1)[0]

        return files
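    # For illustration, a hypothetical binary 'vim-9.0.1-1.1.x86_64.rpm'
    # maps bname 'vim' to its source package; debuginfo, debugsource,
    # '::import::' and '.src' entries are skipped.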

    def check_multiple_specs(self, project, packages):
        expanded_packages = []

        for pkg in packages:
            query = {'expand': 1}
            url = makeurl(self.apiurl, ['source', project, pkg], query=query)
            try:
                root = ET.parse(http_GET(url)).getroot()
            except HTTPError as e:
                if e.code == 404:
                    continue
                raise
            for en in root.findall('entry'):
                if en.attrib['name'].endswith('.spec'):
                    expanded_packages.append(en.attrib['name'][:-5])

        return expanded_packages
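    # e.g. a staged source with 'python-foo.spec' and 'python-foo-doc.spec'
    # (illustrative names) contributes both 'python-foo' and 'python-foo-doc'.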

    def crawl(self):
        """Main method"""
        rebuild_data = self.api.pseudometa_file_load('support_pkg_rebuild')
        if rebuild_data is None:
            print("There is no support_pkg_rebuild file!")
            return

        logging.info('Gathering support package list from %s' % self.project)
        support_pkgs = self.get_support_package_list(self.project, 'standard')
        files = self.process_project_binarylist(self.project, 'standard',
                                                'x86_64')
        staging_projects = [
            "%s:%s" % (self.api.cstaging, p)
            for p in self.api.get_staging_projects_short()
        ]
        cand_sources = defaultdict(list)
        for stg in staging_projects:
            prj_meta = self.api.get_prj_pseudometa(stg)
            prj_staged_packages = [
                req['package'] for req in prj_meta['requests']
            ]
            prj_expanded_packages = self.check_multiple_specs(
                self.project, prj_staged_packages)
            for pkg in support_pkgs:
                if files.get(pkg) and files.get(pkg) in prj_expanded_packages:
                    if files.get(pkg) not in cand_sources[stg]:
                        cand_sources[stg].append(files.get(pkg))

        root = ET.fromstring(rebuild_data)

        logging.info('Checking rebuild data...')

        for stg in root.findall('staging'):
            rebuild = stg.find('rebuild').text
            suppkg_list = stg.find('supportpkg').text
            need_rebuild = False
            suppkgs = []
            if suppkg_list:
                suppkgs = suppkg_list.split(',')

            stgname = stg.get('name')
            if len(cand_sources[stgname]) and rebuild == 'unknown':
                need_rebuild = True
                stg.find('rebuild').text = 'needed'
                new_suppkg_list = ','.join(cand_sources[stgname])
                stg.find('supportpkg').text = new_suppkg_list
            elif len(cand_sources[stgname]) and rebuild != 'unknown':
                for cand in cand_sources[stgname]:
                    if cand not in suppkgs:
                        need_rebuild = True
                        stg.find('rebuild').text = 'needed'
                        break
                new_suppkg_list = ','.join(cand_sources[stgname])
                stg.find('supportpkg').text = new_suppkg_list
            elif not len(cand_sources[stgname]):
                stg.find('rebuild').text = 'unneeded'
                stg.find('supportpkg').text = ''

            if stg.find('rebuild').text == 'needed':
                need_rebuild = True

            if need_rebuild and not self.api.is_repo_dirty(
                    stgname, 'standard'):
                logging.info('Rebuild %s' % stgname)
                osc.core.rebuild(self.apiurl, stgname, None, None, None)
                stg.find('rebuild').text = 'unneeded'

        logging.info('Updating support pkg list...')
        rebuild_data_updated = ET.tostring(root)
        logging.debug(rebuild_data_updated)
        if rebuild_data_updated != rebuild_data:
            self.api.pseudometa_file_save('support_pkg_rebuild',
                                          rebuild_data_updated,
                                          'support package rebuild')
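
# Minimal usage sketch (assumes osc is already configured; the project name
# is illustrative):
#
#   helper = StagingHelper('openSUSE:Factory')
#   helper.crawl()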
Example 4
class ToTestManager(ToolBase.ToolBase):

    def __init__(self, tool):
        ToolBase.ToolBase.__init__(self)
        # copy attributes
        self.logger = logging.getLogger(__name__)
        self.apiurl = tool.apiurl
        self.debug = tool.debug
        self.caching = tool.caching
        self.dryrun = tool.dryrun

    def setup(self, project):
        self.project = ToTest(project, self.apiurl)
        self.api = StagingAPI(self.apiurl, project=project)

    def version_file(self, target):
        return 'version_%s' % target

    def write_version_to_dashboard(self, target, version):
        if self.dryrun or self.project.do_not_release:
            return
        self.api.pseudometa_file_ensure(self.version_file(target), version,
                                        comment='Update version')

    def current_qa_version(self):
        return self.api.pseudometa_file_load(self.version_file('totest'))

    def iso_build_version(self, project, tree, repo=None, arch=None):
        for binary in self.binaries_of_product(project, tree, repo=repo, arch=arch):
            result = re.match(r'.*-(?:Build|Snapshot)([0-9.]+)(?:-Media.*\.iso|\.docker\.tar\.xz|\.raw\.xz)', binary)
            if result:
                return result.group(1)
        raise NotFoundException("can't find %s iso version" % project)

    def version_from_totest_project(self):
        if len(self.project.main_products):
            return self.iso_build_version(self.project.test_project, self.project.main_products[0])

        return self.iso_build_version(self.project.test_project, self.project.image_products[0].package,
                                      arch=self.project.image_products[0].archs[0])

    def binaries_of_product(self, project, product, repo=None, arch=None):
        if repo is None:
            repo = self.project.product_repo
        if arch is None:
            arch = self.project.product_arch

        url = self.api.makeurl(['build', project, repo, arch, product])
        try:
            f = self.api.retried_GET(url)
        except HTTPError:
            return []

        ret = []
        root = ET.parse(f).getroot()
        for binary in root.findall('binary'):
            ret.append(binary.get('filename'))

        return ret

    def ftp_build_version(self, project, tree):
        for binary in self.binaries_of_product(project, tree):
            result = re.match(r'.*-Build(.*)-Media1.report', binary)
            if result:
                return result.group(1)
        raise NotFoundException("can't find %s ftp version" % project)

    # make sure to update the attribute as atomic as possible - as such
    # only update the snapshot and don't erase anything else. The snapshots
    # have very different update times within the pipeline, so there is
    # normally no chance that releaser and publisher overwrite states
    def update_status(self, status, snapshot):
        status_dict = self.get_status_dict()
        if self.dryrun:
            self.logger.info('setting {} snapshot to {}'.format(status, snapshot))
            return
        if status_dict.get(status) != snapshot:
            status_dict[status] = snapshot
            text = yaml.safe_dump(status_dict)
            self.api.attribute_value_save('ToTestManagerStatus', text)
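
    # Illustrative only (assumed shape): the 'ToTestManagerStatus' attribute
    # is a small YAML mapping from status name to snapshot, e.g. with
    # hypothetical keys and values
    #
    #   publishing: '20200107'
    #   testing: '20200108'
    #
    # update_status() rewrites just the one key it was given, which is what
    # keeps releaser and publisher from clobbering each other's state.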

    def get_status_dict(self):
        text = self.api.attribute_value_load('ToTestManagerStatus')
        if text:
            return yaml.safe_load(text)
        return dict()

    def get_status(self, status):
        return self.get_status_dict().get(status)

    def release_package(self, project, package, set_release=None, repository=None,
                         target_project=None, target_repository=None):
        query = {'cmd': 'release'}

        if set_release:
            query['setrelease'] = set_release

        if repository is not None:
            query['repository'] = repository

        if target_project is not None:
            # Both need to be set
            query['target_project'] = target_project
            query['target_repository'] = target_repository

        baseurl = ['source', project, package]

        url = self.api.makeurl(baseurl, query=query)
        if self.dryrun or self.project.do_not_release:
            self.logger.info('release %s/%s (%s)' % (project, package, query))
        else:
            self.api.retried_POST(url)

    def all_repos_done(self, project, codes=None):
        """Check the build result of the project and only return True if all
        repos of that project are either published or unpublished

        """

        # coolo's experience says that 'finished' won't be
        # sufficient here, so don't try to add it :-)
        codes = ['published', 'unpublished'] if not codes else codes

        url = self.api.makeurl(
            ['build', project, '_result'], {'code': 'failed'})
        f = self.api.retried_GET(url)
        root = ET.parse(f).getroot()
        ready = True
        for repo in root.findall('result'):
            # ignore ports. 'factory' is used by arm for repos that are not
            # meant to use the totest manager.
            if repo.get('repository') in ('ports', 'factory', 'images_staging'):
                continue
            if repo.get('dirty') == 'true':
                self.logger.info('%s %s %s -> %s' % (repo.get('project'),
                                                repo.get('repository'), repo.get('arch'), 'dirty'))
                ready = False
            if repo.get('code') not in codes:
                self.logger.info('%s %s %s -> %s' % (repo.get('project'),
                                                repo.get('repository'), repo.get('arch'), repo.get('code')))
                ready = False
        return ready
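
# A minimal usage sketch (not from the original source). It assumes a
# ToolBase-style `tool` object that carries apiurl/debug/caching/dryrun, as
# the constructor above expects; the project name is hypothetical:
#
#   mgr = ToTestManager(tool)
#   mgr.setup('openSUSE:Factory')
#   if mgr.all_repos_done(mgr.project.test_project):
#       mgr.update_status('testing', mgr.version_from_totest_project())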
Example n. 5
class ToTestManager(ToolBase.ToolBase):
    def __init__(self, tool):
        ToolBase.ToolBase.__init__(self)
        # copy attributes
        self.logger = logging.getLogger(__name__)
        self.apiurl = tool.apiurl
        self.debug = tool.debug
        self.caching = tool.caching
        self.dryrun = tool.dryrun

    def setup(self, project):
        self.project = ToTest(project, self.apiurl)
        self.api = StagingAPI(self.apiurl, project=project)

    def version_file(self, target):
        return 'version_%s' % target

    def write_version_to_dashboard(self, target, version):
        if self.dryrun or self.project.do_not_release:
            return
        self.api.pseudometa_file_ensure(self.version_file(target),
                                        version,
                                        comment='Update version')

    def current_qa_version(self):
        return self.api.pseudometa_file_load(self.version_file('totest'))

    def iso_build_version(self, project, tree, repo=None, arch=None):
        for binary in self.binaries_of_product(project,
                                               tree,
                                               repo=repo,
                                               arch=arch):
            result = re.match(
                r'.*-(?:Build|Snapshot)([0-9.]+)(?:-Media.*\.iso|\.docker\.tar\.xz|\.raw\.xz|\.appx)',
                binary)
            if result:
                return result.group(1)
        raise NotFoundException("can't find %s iso version" % project)

    def version_from_totest_project(self):
        if len(self.project.main_products):
            return self.iso_build_version(self.project.test_project,
                                          self.project.main_products[0])

        return self.iso_build_version(
            self.project.test_project,
            self.project.image_products[0].package,
            arch=self.project.image_products[0].archs[0])

    def binaries_of_product(self, project, product, repo=None, arch=None):
        if repo is None:
            repo = self.project.product_repo
        if arch is None:
            arch = self.project.product_arch

        url = self.api.makeurl(['build', project, repo, arch, product])
        try:
            f = self.api.retried_GET(url)
        except HTTPError:
            return []

        ret = []
        root = ET.parse(f).getroot()
        for binary in root.findall('binary'):
            ret.append(binary.get('filename'))

        return ret

    def ftp_build_version(self, project, tree):
        for binary in self.binaries_of_product(project, tree):
            result = re.match(r'.*-Build(.*)-Media1.report', binary)
            if result:
                return result.group(1)
        raise NotFoundException("can't find %s ftp version" % project)

    # make sure to update the attribute as atomic as possible - as such
    # only update the snapshot and don't erase anything else. The snapshots
    # have very different update times within the pipeline, so there is
    # normally no chance that releaser and publisher overwrite states
    def update_status(self, status, snapshot):
        status_dict = self.get_status_dict()
        if self.dryrun:
            self.logger.info('setting {} snapshot to {}'.format(
                status, snapshot))
            return
        if status_dict.get(status) != snapshot:
            status_dict[status] = snapshot
            text = yaml.safe_dump(status_dict)
            self.api.attribute_value_save('ToTestManagerStatus', text)

    def get_status_dict(self):
        text = self.api.attribute_value_load('ToTestManagerStatus')
        if text:
            return yaml.safe_load(text)
        return dict()

    def get_status(self, status):
        return self.get_status_dict().get(status)

    def release_package(self,
                        project,
                        package,
                        set_release=None,
                        repository=None,
                        target_project=None,
                        target_repository=None):
        query = {'cmd': 'release'}

        if set_release:
            query['setrelease'] = set_release

        if repository is not None:
            query['repository'] = repository

        if target_project is not None:
            # Both need to be set
            query['target_project'] = target_project
            query['target_repository'] = target_repository

        baseurl = ['source', project, package]

        url = self.api.makeurl(baseurl, query=query)
        if self.dryrun or self.project.do_not_release:
            self.logger.info('release %s/%s (%s)' % (project, package, query))
        else:
            self.api.retried_POST(url)

    def all_repos_done(self, project, codes=None):
        """Check the build result of the project and only return True if all
        repos of that project are either published or unpublished

        """

        # coolo's experience says that 'finished' won't be
        # sufficient here, so don't try to add it :-)
        codes = ['published', 'unpublished'] if not codes else codes

        url = self.api.makeurl(['build', project, '_result'],
                               {'code': 'failed'})
        f = self.api.retried_GET(url)
        root = ET.parse(f).getroot()
        ready = True
        for repo in root.findall('result'):
            # ignore ports. 'factory' is used by arm for repos that are not
            # meant to use the totest manager.
            if repo.get('repository') in ('ports', 'factory',
                                          'images_staging'):
                continue
            if repo.get('dirty') == 'true':
                self.logger.info('%s %s %s -> %s' %
                                 (repo.get('project'), repo.get('repository'),
                                  repo.get('arch'), 'dirty'))
                ready = False
            if repo.get('code') not in codes:
                self.logger.info('%s %s %s -> %s' %
                                 (repo.get('project'), repo.get('repository'),
                                  repo.get('arch'), repo.get('code')))
                ready = False
        return ready
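
# A sketch (not from the original source) of what release_package() above
# ends up POSTing, assuming makeurl() joins the path segments and encodes the
# query; project, package and release tag are hypothetical:
#
#   POST /source/openSUSE:Factory/000product?cmd=release&setrelease=Snapshot20200108
#
# When target_project/target_repository are given, OBS releases into that
# specific target instead of all configured release targets.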
Example n. 6
class ToTestBase(object):
    """Base class to store the basic interface"""

    product_repo = 'images'
    product_arch = 'local'
    livecd_repo = 'images'
    livecd_archs = ['i586', 'x86_64']

    def __init__(self,
                 project,
                 dryrun=False,
                 norelease=False,
                 api_url=None,
                 openqa_server='https://openqa.opensuse.org',
                 test_subproject=None):
        self.project = project
        self.dryrun = dryrun
        self.norelease = norelease
        if not api_url:
            api_url = osc.conf.config['apiurl']
        self.api = StagingAPI(api_url, project=project)
        self.openqa_server = openqa_server
        if not test_subproject:
            test_subproject = 'ToTest'
        self.test_project = '%s:%s' % (self.project, test_subproject)
        self.openqa = OpenQA_Client(server=openqa_server)
        self.load_issues_to_ignore()
        self.project_base = project.split(':')[0]
        self.update_pinned_descr = False
        self.amqp_url = osc.conf.config.get('ttm_amqp_url')

    def load_issues_to_ignore(self):
        text = self.api.attribute_value_load('IgnoredIssues')
        if text:
            root = yaml.safe_load(text)
            self.issues_to_ignore = root.get('last_seen')
        else:
            self.issues_to_ignore = dict()

    def save_issues_to_ignore(self):
        if self.dryrun:
            return
        text = yaml.dump({'last_seen': self.issues_to_ignore},
                         default_flow_style=False)
        self.api.attribute_value_save('IgnoredIssues', text)

    def openqa_group(self):
        return self.project

    def iso_prefix(self):
        return self.project

    def jobs_num(self):
        return 70

    def current_version(self):
        return self.release_version()

    def binaries_of_product(self, project, product):
        url = self.api.makeurl(
            ['build', project, self.product_repo, self.product_arch, product])
        try:
            f = self.api.retried_GET(url)
        except urllib2.HTTPError:
            return []

        ret = []
        root = ET.parse(f).getroot()
        for binary in root.findall('binary'):
            ret.append(binary.get('filename'))

        return ret

    def get_current_snapshot(self):
        """Return the current snapshot in the test project"""

        for binary in self.binaries_of_product(
                self.test_project,
                '000product:%s-cd-mini-%s' % (self.project_base, self.arch())):
            result = re.match(
                r'%s-%s-NET-.*-Snapshot(.*)-Media.iso' %
                (self.project_base, self.iso_prefix()), binary)
            if result:
                return result.group(1)

        return None

    def ftp_build_version(self, project, tree, base=None):
        if not base:
            base = self.project_base
        for binary in self.binaries_of_product(project, tree):
            result = re.match(r'%s.*Build(.*)-Media1.report' % base, binary)
            if result:
                return result.group(1)
        raise NotFoundException("can't find %s ftp version" % project)

    def iso_build_version(self, project, tree, base=None):
        if not base:
            base = self.project_base
        for binary in self.binaries_of_product(project, tree):
            result = re.match(
                r'.*-(?:Build|Snapshot)([0-9.]+)(?:-Media.*\.iso|\.docker\.tar\.xz)',
                binary)
            if result:
                return result.group(1)
        raise NotFoundException("can't find %s iso version" % project)
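
    # For example (hypothetical filenames), the pattern above extracts:
    #   'openSUSE-Tumbleweed-DVD-x86_64-Snapshot20190314-Media.iso' -> '20190314'
    #   'opensuse-leap-image.x86_64-15.1.0-Build3.4.docker.tar.xz'  -> '3.4'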

    def release_version(self):
        url = self.api.makeurl([
            'build', self.project, 'standard',
            self.arch(),
            '000product:%s-release' % self.project_base
        ])
        f = self.api.retried_GET(url)
        root = ET.parse(f).getroot()
        for binary in root.findall('binary'):
            binary = binary.get('filename', '')
            result = re.match(r'.*-([^-]*)-[^-]*.src.rpm', binary)
            if result:
                return result.group(1)

        raise NotFoundException("can't find %s version" % self.project)

    def current_qa_version(self):
        return self.api.pseudometa_file_load('version_totest')

    def find_openqa_results(self, snapshot):
        """Return the openqa jobs of a given snapshot and filter out the
        cloned jobs

        """

        url = makeurl(self.openqa_server, ['api', 'v1', 'jobs'], {
            'group': self.openqa_group(),
            'build': snapshot,
            'latest': 1
        })
        f = self.api.retried_GET(url)
        jobs = []
        for job in json.load(f)['jobs']:
            if job['clone_id'] or job['result'] == 'obsoleted':
                continue
            job['name'] = job['name'].replace(snapshot, '')
            jobs.append(job)
        return jobs
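
    # Illustrative only: of each entry in the openQA response's 'jobs' array
    # this class uses 'id', 'name', 'result', 'state', 'clone_id' and
    # settings['BUILD']; cloned and obsoleted jobs are dropped above so every
    # job that remains is the latest run of its kind for the snapshot.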

    def _result2str(self, result):
        if result == QA_INPROGRESS:
            return 'inprogress'
        elif result == QA_FAILED:
            return 'failed'
        else:
            return 'passed'

    def find_failed_module(self, testmodules):
        # print json.dumps(testmodules, sort_keys=True, indent=4)
        for module in testmodules:
            if module['result'] != 'failed':
                continue
            flags = module['flags']
            if 'fatal' in flags or 'important' in flags:
                return module['name']
            logger.info('%s %s %s' %
                        (module['name'], module['result'], module['flags']))

    def update_openqa_status_message(self):
        url = makeurl(self.openqa_server, ['api', 'v1', 'job_groups'])
        f = self.api.retried_GET(url)
        job_groups = json.load(f)
        group_id = 0
        for jg in job_groups:
            if jg['name'] == self.openqa_group():
                group_id = jg['id']
                break

        if not group_id:
            logger.debug(
                'No openQA group id found for status comment update, ignoring')
            return

        pinned_ignored_issue = 0
        issues = ' , '.join(self.issues_to_ignore.keys())
        status_flag = 'publishing' if self.status_for_openqa['is_publishing'] else \
            'preparing' if self.status_for_openqa['can_release'] else \
            'testing' if self.status_for_openqa['snapshotable'] else \
            'building'
        status_msg = "tag:{}:{}:{}".format(
            self.status_for_openqa['new_snapshot'], status_flag, status_flag)
        msg = "pinned-description: Ignored issues\r\n\r\n{}\r\n\r\n{}".format(
            issues, status_msg)
        data = {'text': msg}

        url = makeurl(self.openqa_server,
                      ['api', 'v1', 'groups',
                       str(group_id), 'comments'])
        f = self.api.retried_GET(url)
        comments = json.load(f)
        for comment in comments:
            if comment['userName'] == 'ttm' and \
                    comment['text'].startswith('pinned-description: Ignored issues'):
                pinned_ignored_issue = comment['id']

        logger.debug('Writing openQA status message: {}'.format(data))
        if not self.dryrun:
            if pinned_ignored_issue:
                self.openqa.openqa_request('PUT',
                                           'groups/%s/comments/%d' %
                                           (group_id, pinned_ignored_issue),
                                           data=data)
            else:
                self.openqa.openqa_request('POST',
                                           'groups/%s/comments' % group_id,
                                           data=data)

    def overall_result(self, snapshot):
        """Analyze the openQA jobs of a given snapshot Returns a QAResult"""

        if snapshot is None:
            return QA_FAILED

        jobs = self.find_openqa_results(snapshot)

        self.failed_relevant_jobs = []
        self.failed_ignored_jobs = []

        if len(jobs) < self.jobs_num():  # not yet scheduled
            logger.warning('we have only %s jobs' % len(jobs))
            return QA_INPROGRESS

        in_progress = False
        for job in jobs:
            # print json.dumps(job, sort_keys=True, indent=4)
            if job['result'] in ('failed', 'incomplete', 'skipped',
                                 'user_cancelled', 'obsoleted',
                                 'parallel_failed'):
                # print json.dumps(job, sort_keys=True, indent=4), jobname
                url = makeurl(
                    self.openqa_server,
                    ['api', 'v1', 'jobs',
                     str(job['id']), 'comments'])
                f = self.api.retried_GET(url)
                comments = json.load(f)
                refs = set()
                labeled = 0
                to_ignore = False
                for comment in comments:
                    for ref in comment['bugrefs']:
                        refs.add(str(ref))
                    if comment['userName'] == 'ttm' and comment[
                            'text'] == 'label:unknown_failure':
                        labeled = comment['id']
                    if re.search(r'@ttm:? ignore', comment['text']):
                        to_ignore = True
                # to_ignore can happen with or without refs
                ignored = True if to_ignore else len(refs) > 0
                build_nr = str(job['settings']['BUILD'])
                for ref in refs:
                    if ref not in self.issues_to_ignore:
                        if to_ignore:
                            self.issues_to_ignore[ref] = build_nr
                            self.update_pinned_descr = True
                        else:
                            ignored = False
                    else:
                        # update reference
                        self.issues_to_ignore[ref] = build_nr

                if ignored:
                    self.failed_ignored_jobs.append(job['id'])
                    if labeled:
                        text = 'Ignored issue' if len(
                            refs) > 0 else 'Ignored failure'
                        # remove flag - unfortunately can't delete comment unless admin
                        data = {'text': text}
                        if self.dryrun:
                            logger.info("Would label {} with: {}".format(
                                job['id'], text))
                        else:
                            self.openqa.openqa_request('PUT',
                                                       'jobs/%s/comments/%d' %
                                                       (job['id'], labeled),
                                                       data=data)

                    logger.info("job %s failed, but was ignored", job['name'])
                else:
                    self.failed_relevant_jobs.append(job['id'])
                    if not labeled and len(refs) > 0:
                        data = {'text': 'label:unknown_failure'}
                        if self.dryrun:
                            logger.info("Would label {} as unknown".format(
                                job['id']))
                        else:
                            self.openqa.openqa_request('POST',
                                                       'jobs/%s/comments' %
                                                       job['id'],
                                                       data=data)

                    joburl = '%s/tests/%s' % (self.openqa_server, job['id'])
                    logger.info("job %s failed, see %s", job['name'], joburl)

            elif job['result'] == 'passed' or job['result'] == 'softfailed':
                continue
            elif job['result'] == 'none':
                if job['state'] != 'cancelled':
                    in_progress = True
            else:
                raise Exception(job['result'])

        self.save_issues_to_ignore()

        if len(self.failed_relevant_jobs) > 0:
            return QA_FAILED

        if in_progress:
            return QA_INPROGRESS

        return QA_PASSED

    def all_repos_done(self, project, codes=None):
        """Check the build result of the project and only return True if all
        repos of that project are either published or unpublished

        """

        # coolo's experience says that 'finished' won't be
        # sufficient here, so don't try to add it :-)
        codes = ['published', 'unpublished'] if not codes else codes

        url = self.api.makeurl(['build', project, '_result'],
                               {'code': 'failed'})
        f = self.api.retried_GET(url)
        root = ET.parse(f).getroot()
        ready = True
        for repo in root.findall('result'):
            # ignore ports. 'factory' is used by arm for repos that are not
            # meant to use the totest manager.
            if repo.get('repository') in ('ports', 'factory',
                                          'images_staging'):
                continue
            if repo.get('dirty', '') == 'true':
                logger.info('%s %s %s -> %s' %
                            (repo.get('project'), repo.get('repository'),
                             repo.get('arch'), 'dirty'))
                ready = False
            if repo.get('code') not in codes:
                logger.info('%s %s %s -> %s' %
                            (repo.get('project'), repo.get('repository'),
                             repo.get('arch'), repo.get('code')))
                ready = False
        return ready

    def maxsize_for_package(self, package):
        if re.match(r'.*-mini-.*', package):
            return 737280000  # a CD needs to match

        if re.match(r'.*-dvd5-.*', package):
            return 4700372992  # a DVD needs to match

        if re.match(r'livecd-x11', package):
            return 681574400  # not a full CD

        if re.match(r'livecd-.*', package):
            return 999999999  # a GB stick

        if re.match(r'.*-(dvd9-dvd|cd-DVD)-.*', package):
            return 8539996159

        if re.match(r'.*-ftp-(ftp|POOL)-', package):
            return None

        # docker container has no size limit
        if re.match(r'opensuse-leap-image.*', package):
            return None

        if '-Addon-NonOss-ftp-ftp' in package:
            return None

        if 'JeOS' in package:
            return 4700372992

        raise Exception('No maxsize for {}'.format(package))

    def package_ok(self, project, package, repository, arch):
        """Checks one package in a project and returns True if it's succeeded

        """

        query = {'package': package, 'repository': repository, 'arch': arch}

        url = self.api.makeurl(['build', project, '_result'], query)
        f = self.api.retried_GET(url)
        root = ET.parse(f).getroot()
        for repo in root.findall('result'):
            status = repo.find('status')
            if status.get('code') != 'succeeded':
                logger.info(
                    '%s %s %s %s -> %s' %
                    (project, package, repository, arch, status.get('code')))
                return False

        maxsize = self.maxsize_for_package(package)
        if not maxsize:
            return True

        url = self.api.makeurl(['build', project, repository, arch, package])
        f = self.api.retried_GET(url)
        root = ET.parse(f).getroot()
        for binary in root.findall('binary'):
            if not binary.get('filename', '').endswith('.iso'):
                continue
            isosize = int(binary.get('size', 0))
            if isosize > maxsize:
                logger.error('%s %s %s %s: %s' %
                             (project, package, repository, arch,
                              'too large by %s bytes' % (isosize - maxsize)))
                return False

        return True

    def is_snapshottable(self):
        """Check various conditions required for factory to be snapshotable

        """

        if not self.all_repos_done(self.project):
            return False

        for product in self.ftp_products + self.main_products:
            if not self.package_ok(self.project, product, self.product_repo,
                                   self.product_arch):
                return False

            if len(self.livecd_products):

                if not self.all_repos_done('%s:Live' % self.project):
                    return False

                for arch in self.livecd_archs:
                    for product in self.livecd_products:
                        if not self.package_ok('%s:Live' % self.project,
                                               product, self.livecd_repo,
                                               arch):
                            return False

        return True

    def _release_package(self, project, package, set_release=None):
        query = {'cmd': 'release'}

        if set_release:
            query['setrelease'] = set_release

        # FIXME: make configurable. openSUSE:Factory:ARM currently has multiple
        # repos with release targets, so obs needs to know which one to release
        if project == 'openSUSE:Factory:ARM':
            query['repository'] = 'images'

        baseurl = ['source', project, package]

        url = self.api.makeurl(baseurl, query=query)
        if self.dryrun or self.norelease:
            logger.info("release %s/%s (%s)" % (project, package, set_release))
        else:
            self.api.retried_POST(url)

    def _release(self, set_release=None):
        for product in self.ftp_products:
            self._release_package(self.project, product)

        for cd in self.livecd_products:
            self._release_package('%s:Live' % self.project,
                                  cd,
                                  set_release=set_release)

        for cd in self.main_products:
            self._release_package(self.project, cd, set_release=set_release)

    def update_totest(self, snapshot=None):
        release = 'Snapshot%s' % snapshot if snapshot else None
        logger.info('Updating snapshot %s' % snapshot)
        if not (self.dryrun or self.norelease):
            self.api.switch_flag_in_prj(self.test_project,
                                        flag='publish',
                                        state='disable')

        self._release(set_release=release)

    def publish_factory_totest(self):
        logger.info('Publish test project content')
        if not (self.dryrun or self.norelease):
            self.api.switch_flag_in_prj(self.test_project,
                                        flag='publish',
                                        state='enable')

    def totest_is_publishing(self):
        """Find out if the publishing flag is set in totest's _meta"""

        url = self.api.makeurl(['source', self.test_project, '_meta'])
        f = self.api.retried_GET(url)
        root = ET.parse(f).getroot()
        publish = root.find('publish')
        if publish is None or len(publish) == 0:  # default true
            return True

        for flag in publish:
            if flag.get('repository', None) or flag.get('arch', None):
                continue
            if flag.tag == 'enable':
                return True
        return False
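
    # For illustration (assumed _meta snippet): a test project that is
    # currently publishing would carry something like
    #
    #   <publish>
    #     <enable/>
    #   </publish>
    #
    # Repository- or arch-scoped flags (e.g. <enable repository="images"/>)
    # are skipped above; only the global flag decides the result.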

    def totest(self):
        try:
            current_snapshot = self.get_current_snapshot()
        except NotFoundException as e:
            # nothing in test project (yet)
            logger.warning(e)
            current_snapshot = None
        new_snapshot = self.current_version()
        self.update_pinned_descr = False
        current_result = self.overall_result(current_snapshot)
        current_qa_version = self.current_qa_version()

        logger.info('current_snapshot %s: %s' %
                    (current_snapshot, self._result2str(current_result)))
        logger.debug('new_snapshot %s', new_snapshot)
        logger.debug('current_qa_version %s', current_qa_version)

        snapshotable = self.is_snapshottable()
        logger.debug("snapshotable: %s", snapshotable)
        can_release = ((current_snapshot is None
                        or current_result != QA_INPROGRESS) and snapshotable)

        # not overwriting
        if new_snapshot == current_snapshot:
            logger.debug("no change in snapshot version")
            can_release = False
        elif not self.all_repos_done(self.test_project):
            logger.debug("not all repos done, can't release")
            # the repos have to be done, otherwise we better not touch them
            # with a new release
            can_release = False

        self.send_amqp_event(current_snapshot, current_result)

        can_publish = (current_result == QA_PASSED)

        # already published
        totest_is_publishing = self.totest_is_publishing()
        if totest_is_publishing:
            logger.debug("totest already publishing")
            can_publish = False

        if self.update_pinned_descr:
            self.status_for_openqa = {
                'current_snapshot': current_snapshot,
                'new_snapshot': new_snapshot,
                'snapshotable': snapshotable,
                'can_release': can_release,
                'is_publishing': totest_is_publishing,
            }
            self.update_openqa_status_message()

        if can_publish:
            if current_qa_version == current_snapshot:
                self.publish_factory_totest()
                self.write_version_to_dashboard("snapshot", current_snapshot)
                can_release = False  # we have to wait
            else:
                # We reached a very bad status: openQA testing is 'done', but not of the same version
                # currently in test project. This can happen when 'releasing' the
                # product failed
                raise Exception(
                    "Publishing stopped: tested version (%s) does not match version in test project (%s)"
                    % (current_qa_version, current_snapshot))

        if can_release:
            self.update_totest(new_snapshot)
            self.write_version_to_dashboard("totest", new_snapshot)

    def send_amqp_event(self, current_snapshot, current_result):
        if not self.amqp_url:
            logger.debug(
                'No ttm_amqp_url configured in oscrc - skipping amqp event emission'
            )
            return

        logger.debug('Sending AMQP message')
        inf = re.sub(r"ed$", '', self._result2str(current_result))
        msg_topic = '%s.ttm.build.%s' % (self.project_base.lower(), inf)
        msg_body = json.dumps({
            'build': current_snapshot,
            'project': self.project,
            'failed_jobs': {
                'relevant': self.failed_relevant_jobs,
                'ignored': self.failed_ignored_jobs,
            }
        })

        # send amqp event
        tries = 7  # arbitrary
        for t in range(tries):
            try:
                notify_connection = pika.BlockingConnection(
                    pika.URLParameters(self.amqp_url))
                notify_channel = notify_connection.channel()
                notify_channel.exchange_declare(exchange='pubsub',
                                                exchange_type='topic',
                                                passive=True,
                                                durable=True)
                notify_channel.basic_publish(exchange='pubsub',
                                             routing_key=msg_topic,
                                             body=msg_body)
                notify_connection.close()
                break
            except pika.exceptions.ConnectionClosed as e:
                logger.warning(
                    'Sending AMQP event did not work: %s. Retrying try %s out of %s'
                    % (e, t + 1, tries))
        else:
            logger.error(
                'Could not send out AMQP event for %s tries, aborting.' %
                tries)

    def release(self):
        new_snapshot = self.current_version()
        self.update_totest(new_snapshot)

    def write_version_to_dashboard(self, target, version):
        if not (self.dryrun or self.norelease):
            self.api.pseudometa_file_ensure('version_%s' % target,
                                            version,
                                            comment='Update version')
Example n. 7
class StagingHelper(object):
    def __init__(self, project):
        self.project = project
        self.apiurl = osc.conf.config['apiurl']
        Config(self.apiurl, self.project)
        self.api = StagingAPI(self.apiurl, self.project)

    def get_support_package_list(self, project, repository):
        f = osc.core.get_buildconfig(self.apiurl, project, repository).splitlines()
        pkg_list = []
        for line in f:
            if re.match('Preinstall', line) or re.match('VM[Ii]nstall', line) or re.match('Support', line):
                content = line.split(':')
                variables = [x.strip() for x in content[1].split(' ')]
                for var in variables:
                    if var != '' and var not in pkg_list:
                        if var.startswith('!') and var[1:] in pkg_list:
                            pkg_list.remove(var[1:])
                        else:
                            pkg_list.append(var)
        return pkg_list
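
    # Illustrative only: the build config lines parsed above look roughly
    # like this (package names are hypothetical)
    #
    #   Preinstall: aaa_base bash coreutils
    #   Support: binutils gcc
    #   Support: !gcc
    #
    # where a '!pkg' entry removes a previously collected package again.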

    def get_project_binarylist(self, project, repository, arch):
        query = {'view': 'binaryversions'}
        root = ET.parse(http_GET(makeurl(self.apiurl, ['build', project, repository, arch],
            query=query))).getroot()
        return root

    def process_project_binarylist(self, project, repository, arch):
        prj_binarylist = self.get_project_binarylist(project, repository, arch)
        files = {}
        for package in prj_binarylist.findall('./binaryversionlist'):
            for binary in package.findall('binary'):
                result = re.match(r'(.*)-([^-]*)-([^-]*)\.([^-\.]+)\.rpm', binary.attrib['name'])
                if not result:
                    continue
                bname = result.group(1)
                if bname.endswith('-debuginfo') or bname.endswith('-debuginfo-32bit'):
                    continue
                if bname.endswith('-debugsource'):
                    continue
                if bname.startswith('::import::'):
                    continue
                if result.group(4) == 'src':
                    continue
                files[bname] = package.attrib['package'].split(':', 1)[0]

        return files

    def check_multiple_specs(self, project, packages):
        expanded_packages = []

        for pkg in packages:
            query = {'expand': 1}
            url = makeurl(self.apiurl, ['source', project, pkg], query=query)
            try:
                root = ET.parse(http_GET(url)).getroot()
            except HTTPError as e:
                if e.code == 404:
                    continue
                raise
            for en in root.findall('entry'):
                if en.attrib['name'].endswith('.spec'):
                    expanded_packages.append(en.attrib['name'][:-5])

        return expanded_packages
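
    # For example (hypothetical listing): if the expanded source listing of
    # 'python-foo' contains 'python-foo.spec' and 'python-foo-doc.spec', both
    # names end up in expanded_packages, so per-spec flavours of a staged
    # package can be matched against the binary map as well.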

    def crawl(self):
        """Main method"""
        rebuild_data = self.api.pseudometa_file_load('support_pkg_rebuild')
        if rebuild_data is None:
            print("There is no support_pkg_rebuild file!")
            return

        logging.info('Gathering support package list from %s' % self.project)
        support_pkgs = self.get_support_package_list(self.project, 'standard')
        files = self.process_project_binarylist(self.project, 'standard', 'x86_64')
        staging_projects = ["%s:%s" % (self.api.cstaging, p) for p in self.api.get_staging_projects_short()]
        cand_sources = defaultdict(list)
        for stg in staging_projects:
            prj_meta = self.api.get_prj_pseudometa(stg)
            prj_staged_packages = [req['package'] for req in prj_meta['requests']]
            prj_expanded_packages = self.check_multiple_specs(self.project, prj_staged_packages)
            for pkg in support_pkgs:
                if files.get(pkg) and files.get(pkg) in prj_expanded_packages:
                    if files.get(pkg) not in cand_sources[stg]:
                        cand_sources[stg].append(files.get(pkg))

        root = ET.fromstring(rebuild_data)

        logging.info('Checking rebuild data...')

        for stg in root.findall('staging'):
            rebuild = stg.find('rebuild').text
            suppkg_list = stg.find('supportpkg').text
            need_rebuild = False
            suppkgs = []
            if suppkg_list:
                suppkgs = suppkg_list.split(',')

            stgname = stg.get('name')
            if len(cand_sources[stgname]) and rebuild == 'unknown':
                need_rebuild = True
                stg.find('rebuild').text = 'needed'
                new_suppkg_list = ','.join(cand_sources[stgname])
                stg.find('supportpkg').text = new_suppkg_list
            elif len(cand_sources[stgname]) and rebuild != 'unknown':
                for cand in cand_sources[stgname]:
                    if cand not in suppkgs:
                        need_rebuild = True
                        stg.find('rebuild').text = 'needed'
                        break
                new_suppkg_list = ','.join(cand_sources[stgname])
                stg.find('supportpkg').text = new_suppkg_list
            elif not len(cand_sources[stgname]):
                stg.find('rebuild').text = 'unneeded'
                stg.find('supportpkg').text = ''

            if stg.find('rebuild').text == 'needed':
                need_rebuild = True

            if need_rebuild and not self.api.is_repo_dirty(stgname, 'standard'):
                logging.info('Rebuild %s' % stgname)
                osc.core.rebuild(self.apiurl, stgname, None, None, None)
                stg.find('rebuild').text = 'unneeded'

        logging.info('Updating support pkg list...')
        # serialize back to text so the comparison with the loaded data also
        # holds under Python 3, where ET.tostring() returns bytes by default
        rebuild_data_updated = ET.tostring(root, encoding='unicode')
        logging.debug(rebuild_data_updated)
        if rebuild_data_updated != rebuild_data:
            self.api.pseudometa_file_save(
                'support_pkg_rebuild', rebuild_data_updated, 'support package rebuild')