Example no. 1
class Spec2000(Workload):

    name = 'spec2000'
    description = """
    SPEC2000 benchmarks measuring processor, memory and compiler performance.

    http://www.spec.org/cpu2000/

    From the web site:

    SPEC CPU2000 is the next-generation industry-standardized CPU-intensive benchmark suite. SPEC
    designed CPU2000 to provide a comparative measure of compute intensive performance across the
    widest practical range of hardware. The implementation resulted in source code benchmarks
    developed from real user applications. These benchmarks measure the performance of the
    processor, memory and compiler on the tested system.

    .. note:: At the moment, this workload relies on pre-built SPEC binaries (included in an
              asset bundle). These binaries *must* be built according to rules outlined here::

                  http://www.spec.org/cpu2000/docs/runrules.html#toc_2.0

              in order for the results to be valid SPEC2000 results.

    .. note:: This workload does not attempt to generate results in an admissible SPEC format. No
              metadata is provided (though some, but not all, of the required metadata is collected
              by WA elsewhere). It is up to the user to post-process the results to generate a
              SPEC-admissible results file, if that is their intention.

    *base vs peak*

    SPEC2000 defines two build/test configurations: base and peak. Base is supposed to use a basic
    configuration (e.g. default compiler flags) with no tuning, and peak is specifically optimized for
    a system. Since this workload uses externally-built binaries, there is no way for WA to be sure
    what configuration is used -- the user is expected to keep track of that. Be aware that
    base/peak also come with specific requirements for the way workloads are run (e.g. how many instances
    on multi-core systems)::

        http://www.spec.org/cpu2000/docs/runrules.html#toc_3

    These are not enforced by WA, so it is again up to the user to ensure that correct workload
    parameters are specified in the agenda, if they intend to collect "official" SPEC results. (Those
    interested in collecting official SPEC results should also note that setting runtime parameters
    would violate SPEC run rules, which state that no configuration must be done to the platform
    after boot.)

    *bundle structure*

    This workload expects the actual benchmark binaries to be provided in a tarball "bundle" that has
    a very specific structure. At the top level of the tarball, there should be two directories: "fp"
    and "int" -- one for each of the SPEC2000 categories. Under those, there is a sub-directory per
    benchmark. Each benchmark sub-directory contains three sub-sub-directories:

    - "cpus" contains a subdirectory for each supported cpu (e.g. a15) with a single executable binary
      for that cpu, in addition to a "generic" subdirectory whose binary has not been optimized for a
      specific cpu and should run on any ARM system.
    - "data" contains all additional files (input, configuration, etc.) that the benchmark executable
      relies on.
    - "scripts" contains one or more one-liner shell scripts that invoke the benchmark binary with
      appropriate command line parameters. The name of the script must be in the format
      <benchmark name>[.<variant name>].sh, i.e. the name of the benchmark, optionally followed by the
      variant name, followed by the ".sh" extension. If there is more than one script, then all of them
      must have a variant; if there is only one script, it should not contain a variant.

    A typical bundle may look like this::

        |- fp
        |  |-- ammp
        |  |   |-- cpus
        |  |   |   |-- generic
        |  |   |   |   |-- ammp
        |  |   |   |-- a15
        |  |   |   |   |-- ammp
        |  |   |   |-- a7
        |  |   |   |   |-- ammp
        |  |   |-- data
        |  |   |   |-- ammp.in
        |  |   |-- scripts
        |  |   |   |-- ammp.sh
        |  |-- applu
        .  .   .
        .  .   .
        .  .   .
        |- int
        .

    """

    # TODO: This is a bit of a hack. Need to re-think summary metric indication
    #      (also more than just summary/non-summary classification?)
    class _SPECSummaryMetrics(object):
        def __contains__(self, item):
            return item.endswith('_real')

    asset_file = 'spec2000-assets.tar.gz'

    aliases = [
        Alias('spec2k'),
    ]

    summary_metrics = _SPECSummaryMetrics()

    parameters = [
        Parameter('benchmarks',
                  kind=list_or_string,
                  description='Specifies the SPEC benchmarks to run.'),
        Parameter(
            'mode',
            kind=str,
            allowed_values=['speed', 'rate'],
            default='speed',
            description=
            'SPEC benchmarks can report either execution speed or throughput/rate. '
            'In the latter case, several "threads" will be spawned.'),
        Parameter(
            'number_of_threads',
            kind=int,
            default=None,
            description=
            'Specify the number of "threads" to be used in \'rate\' mode. (Note: '
            'on big.LITTLE systems this is the number of threads for *each cluster*.) '
        ),
        Parameter(
            'force_extract_assets',
            kind=boolean,
            default=False,
            description=
            'If set to ``True``, will extract assets from the bundle, even if they are '
            'already extracted. Note: this option implies ``force_push_assets``.'
        ),
        Parameter(
            'force_push_assets',
            kind=boolean,
            default=False,
            description=
            'If set to ``True``, assets will be pushed to device even if they\'re already '
            'present.'),
        Parameter(
            'timeout',
            kind=int,
            default=20 * 60,
            description=
            'Timeout, in seconds, for the execution of a single SPEC test.'),
    ]

    speed_run_template = 'cd {datadir}; time ({launch_command})'
    rate_run_template = 'cd {datadir}; time ({loop}; wait)'
    loop_template = 'for i in $(busybox seq 1 {threads}); do {launch_command} 1>/dev/null 2>&1 & done'
    launch_template = 'busybox taskset {cpumask} {command} 1>/dev/null 2>&1'

    timing_regex = re.compile(
        r'(?P<minutes>\d+)m(?P<seconds>[\d.]+)s\s+(?P<category>\w+)')

    def init_resources(self, context):
        self._load_spec_benchmarks(context)

    def setup(self, context):
        cpus = self.device.core_names
        if not cpus:
            raise WorkloadError(
                'Device has not specified CPU cores configuration.')
        cpumap = defaultdict(list)
        for i, cpu in enumerate(cpus):
            cpumap[cpu.lower()].append(i)
        for benchspec in self.benchmarks:
            commandspecs = self._verify_and_deploy_benchmark(benchspec, cpumap)
            self._build_command(benchspec, commandspecs)

    def run(self, context):
        for name, command in self.commands:
            self.timings[name] = self.device.execute(command,
                                                     timeout=self.timeout)

    def update_result(self, context):
        for benchmark, output in self.timings.iteritems():
            matches = self.timing_regex.finditer(output)
            found = False
            for match in matches:
                category = match.group('category')
                mins = float(match.group('minutes'))
                secs = float(match.group('seconds'))
                total = secs + 60 * mins
                context.result.add_metric('_'.join([benchmark, category]),
                                          total,
                                          'seconds',
                                          lower_is_better=True)
                found = True
            if not found:
                self.logger.error(
                    'Could not get timings for {}'.format(benchmark))

    def validate(self):
        if self.force_extract_assets:
            self.force_push_assets = True
        if self.benchmarks is None:  # pylint: disable=access-member-before-definition
            self.benchmarks = ['all']
        for benchname in self.benchmarks:
            if benchname == 'all':
                self.benchmarks = self.loaded_benchmarks.keys()
                break
            if benchname not in self.loaded_benchmarks:
                raise ConfigError(
                    'Unknown SPEC benchmark: {}'.format(benchname))
        if self.mode == 'speed':
            if self.number_of_threads is not None:
                raise ConfigError(
                    'number_of_threads cannot be specified in speed mode.')
        elif self.mode != 'rate':
            raise ValueError('Unexpected SPEC2000 mode: {}'.format(
                self.mode))  # Should never get here
        self.commands = []
        self.timings = {}

    def _load_spec_benchmarks(self, context):
        self.loaded_benchmarks = {}
        self.categories = set()
        if self.force_extract_assets or len(
                os.listdir(self.dependencies_directory)) < 2:
            bundle = context.resolver.get(ExtensionAsset(
                self, self.asset_file))
            with tarfile.open(bundle, 'r:gz') as tf:
                tf.extractall(self.dependencies_directory)
        for entry in os.listdir(self.dependencies_directory):
            entrypath = os.path.join(self.dependencies_directory, entry)
            if os.path.isdir(entrypath):
                for bench in os.listdir(entrypath):
                    self.categories.add(entry)
                    benchpath = os.path.join(entrypath, bench)
                    self._load_benchmark(benchpath, entry)

    def _load_benchmark(self, path, category):
        datafiles = []
        cpus = []
        for df in os.listdir(os.path.join(path, 'data')):
            datafiles.append(os.path.join(path, 'data', df))
        for cpu in os.listdir(os.path.join(path, 'cpus')):
            cpus.append(cpu)
        commandsdir = os.path.join(path, 'commands')
        for command in os.listdir(commandsdir):
            bench = SpecBenchmark()
            bench.name = os.path.splitext(command)[0]
            bench.path = path
            bench.category = category
            bench.datafiles = datafiles
            bench.cpus = cpus
            with open(os.path.join(commandsdir, command)) as fh:
                bench.command_template = string.Template(fh.read().strip())
            self.loaded_benchmarks[bench.name] = bench

    def _verify_and_deploy_benchmark(self, benchspec, cpumap):  # pylint: disable=R0914
        """Verifies that the supplied benchmark spec is valid and deploys the required assets
        to the device (if necessary). Returns a list of command specs (one for each CPU cluster)
        that can then be used to construct the final command."""
        bench = self.loaded_benchmarks[benchspec]
        basename = benchspec.split('.')[0]
        datadir = self.device.path.join(self.device.working_directory,
                                        self.name, basename)
        if self.force_push_assets or not self.device.file_exists(datadir):
            self.device.execute('mkdir -p {}'.format(datadir))
            for datafile in bench.datafiles:
                self.device.push_file(
                    datafile,
                    self.device.path.join(datadir, os.path.basename(datafile)))

        if self.mode == 'speed':
            cpus = [self._get_fastest_cpu().lower()]
        else:
            cpus = cpumap.keys()

        cmdspecs = []
        for cpu in cpus:
            try:
                host_bin_file = bench.get_binary(cpu)
            except ValueError, e:
                try:
                    msg = e.message
                    msg += ' Attempting to use generic binary instead.'
                    self.logger.debug(msg)
                    host_bin_file = bench.get_binary('generic')
                    cpu = 'generic'
                except ValueError, e:
                    raise ConfigError(e.message)  # re-raising as user error
            binname = os.path.basename(host_bin_file)
            binary = self.device.install(host_bin_file,
                                         with_name='.'.join([binname, cpu]))
            commandspec = CommandSpec()
            commandspec.command = bench.command_template.substitute(
                {'binary': binary})
            commandspec.datadir = datadir
            commandspec.cpumask = get_cpu_mask(cpumap[cpu])
            cmdspecs.append(commandspec)
        return cmdspecs
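
The run command is assembled purely by string substitution from the templates defined on the
class. A minimal sketch (not part of the workload) of that composition for a hypothetical
'rate'-mode run; the cpumask, thread count, data directory and launch command below are
invented illustration values:

launch_template = 'busybox taskset {cpumask} {command} 1>/dev/null 2>&1'
loop_template = ('for i in $(busybox seq 1 {threads}); '
                 'do {launch_command} 1>/dev/null 2>&1 & done')
rate_run_template = 'cd {datadir}; time ({loop}; wait)'

launch = launch_template.format(cpumask='0xf', command='./ammp.generic < ammp.in')
loop = loop_template.format(threads=4, launch_command=launch)
print(rate_run_template.format(datadir='/data/local/tmp/spec2000/ammp', loop=loop))
# cd /data/local/tmp/spec2000/ammp; time (for i in $(busybox seq 1 4); do busybox taskset 0xf
# ./ammp.generic < ammp.in 1>/dev/null 2>&1 1>/dev/null 2>&1 & done; wait)
# (The redirections appear twice because both the launch and loop templates include them.)
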
class GlbCorp(ApkWorkload):

    name = 'glb_corporate'
    description = """
    GFXBench GL (a.k.a. GLBench) v3.0 Corporate version.

    This is a version of GLBench available through a corporate license (distinct
    from the version available in Google Play store).

    """
    package = 'net.kishonti.gfxbench'
    activity = 'net.kishonti.benchui.TestActivity'

    result_start_regex = None
    preamble_regex = None

    valid_test_ids = [
        'gl_alu',
        'gl_alu_off',
        'gl_blending',
        'gl_blending_off',
        'gl_driver',
        'gl_driver_off',
        'gl_fill',
        'gl_fill_off',
        'gl_manhattan',
        'gl_manhattan_off',
        'gl_trex',
        'gl_trex_battery',
        'gl_trex_off',
        'gl_trex_qmatch',
        'gl_trex_qmatch_highp',
    ]

    supported_resolutions = {
        '720p': {
            '-ei -w': 1280,
            '-ei -h': 720,
        },
        '1080p': {
            '-ei -w': 1920,
            '-ei -h': 1080,
        }
    }

    parameters = [
        Parameter(
            'times',
            kind=int,
            default=1,
            constraint=lambda x: x > 0,
            description=
            ('Specifies the number of times the benchmark will be run in a "tight '
             'loop", i.e. without performing setup/teardown in between.')),
        Parameter(
            'resolution',
            default=None,
            allowed_values=['720p', '1080p', '720', '1080'],
            description=
            ('Explicitly specifies the resolution at which the benchmark will '
             'be run. If not specified, the device\'s native resolution will be used.'
             )),
        Parameter('test_id',
                  default='gl_manhattan_off',
                  allowed_values=valid_test_ids,
                  description='ID of the GFXBench test to be run.'),
        Parameter('run_timeout',
                  kind=int,
                  default=10 * 60,
                  description="""
                  Timeout for workload execution. The workload will be killed if it hasn't completed
                  within this period.
                  """),
    ]

    aliases = [
        Alias('manhattan', test_id='gl_manhattan'),
        Alias('manhattan_off', test_id='gl_manhattan_off'),
        Alias('manhattan_offscreen', test_id='gl_manhattan_off'),
    ]

    def setup(self, context):
        super(GlbCorp, self).setup(context)
        self.command = self._build_command()
        self.monitor = GlbRunMonitor(self.device)
        self.monitor.start()

    def launch_package(self):
        # Unlike with most other APK workloads, we're invoking the use case
        # directly by starting the activity with appropriate parameters on the
        # command line during execution, so we don't need to start the
        # activity during setup.
        pass

    def run(self, context):
        for _ in xrange(self.times):
            result = self.device.execute(self.command,
                                         timeout=self.run_timeout)
            if 'FAILURE' in result:
                raise WorkloadError(result)
            else:
                self.logger.debug(result)
            time.sleep(DELAY)
            self.monitor.wait_for_run_end(self.run_timeout)

    def update_result(self, context):  # NOQA
        super(GlbCorp, self).update_result(context)
        self.monitor.stop()
        iteration = 0
        results = []
        with open(self.logcat_log) as fh:
            try:
                line = fh.next()
                result_lines = []
                while True:
                    if OLD_RESULT_START_REGEX.search(line):
                        self.preamble_regex = OLD_PREAMBLE_REGEX
                        self.result_start_regex = OLD_RESULT_START_REGEX
                    elif NEW_RESULT_START_REGEX.search(line):
                        self.preamble_regex = NEW_PREAMBLE_REGEX
                        self.result_start_regex = NEW_RESULT_START_REGEX

                    if self.result_start_regex and self.result_start_regex.search(
                            line):
                        result_lines.append('{')
                        line = fh.next()
                        while self.preamble_regex.search(line):
                            result_lines.append(
                                self.preamble_regex.sub('', line))
                            line = fh.next()
                        try:
                            result = json.loads(''.join(result_lines))
                            results.append(result)
                            if iteration:
                                suffix = '_{}'.format(iteration)
                            else:
                                suffix = ''
                            for sub_result in result['results']:
                                frames = sub_result['score']
                                elapsed_time = sub_result['elapsed_time'] / 1000
                                fps = frames / elapsed_time
                                context.result.add_metric(
                                    'score' + suffix, frames, 'frames')
                                context.result.add_metric('fps' + suffix, fps)
                        except ValueError:
                            self.logger.warning(
                                'Could not parse result for iteration {}'.
                                format(iteration))
                        result_lines = []
                        iteration += 1
                    line = fh.next()
            except StopIteration:
                pass  # EOF
        if results:
            outfile = os.path.join(context.output_directory,
                                   'glb-results.json')
            with open(outfile, 'wb') as wfh:
                json.dump(results, wfh, indent=4)

    def _build_command(self):
        command_params = []
        command_params.append('-e test_ids "{}"'.format(self.test_id))
        if self.resolution:
            if not self.resolution.endswith('p'):
                self.resolution += 'p'
            for k, v in self.supported_resolutions[
                    self.resolution].iteritems():
                command_params.append('{} {}'.format(k, v))
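        # am start flags: -W waits for the launch to complete, -S force-stops
        # the target app before starting the activity.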
        return 'am start -W -S -n {}/{} {}'.format(self.package, self.activity,
                                                   ' '.join(command_params))
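
For reference, a sketch (not part of the workload) of the launch command _build_command produces
for the default test_id at 1080p; the package, activity and resolution values come from the class
attributes above, while the ordering of the '-ei' pairs is assumed:

params = ['-e test_ids "gl_manhattan_off"', '-ei -w 1920', '-ei -h 1080']
print('am start -W -S -n {}/{} {}'.format('net.kishonti.gfxbench',
                                          'net.kishonti.benchui.TestActivity',
                                          ' '.join(params)))
# am start -W -S -n net.kishonti.gfxbench/net.kishonti.benchui.TestActivity -e test_ids "gl_manhattan_off" -ei -w 1920 -ei -h 1080
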
Example no. 3
class BBench(Workload):

    name = 'bbench'
    description = """
    BBench opens the built-in browser and navigates to, and scrolls through,
    a set of preloaded web pages. The workload finishes when the browser
    connects back to a local server that the workload itself starts on the
    device. It can also play an audio file in the background while it runs.

    """

    summary_metrics = ['Mean Latency']

    parameters = [
        Parameter(
            'with_audio',
            kind=boolean,
            default=False,
            description=
            ('Specifies whether an MP3 should be played in the background during '
             'workload execution.')),
        Parameter(
            'server_timeout',
            kind=int,
            default=300,
            description=
            'Specifies the timeout (in seconds) before the server is stopped.'
        ),
        Parameter(
            'force_dependency_push',
            kind=boolean,
            default=False,
            description=
            ('Specifies whether to push dependency files to the device even if '
             'they are already on it.')),
        Parameter(
            'audio_file',
            default=os.path.join(settings.dependencies_directory,
                                 'Canon_in_D_Piano.mp3'),
            description=
            ('The (on-host) path to the audio file to be played. This is only used if '
             '``with_audio`` is ``True``.')),
        Parameter(
            'perform_cleanup',
            kind=boolean,
            default=False,
            description=
            'If ``True``, workload files on the device will be deleted after execution.'
        ),
        Parameter(
            'clear_file_cache',
            kind=boolean,
            default=True,
            description=
            'Clear the file cache on the target device prior to running the workload.'
        ),
        Parameter('browser_package',
                  default='com.android.browser',
                  description=
                  'Specifies the package name of the device\'s browser app.'),
        Parameter(
            'browser_activity',
            default='.BrowserActivity',
            description=
            'Specifies the startup activity name of the device\'s browser app.'
        ),
    ]

    aliases = [
        Alias('bbench_with_audio', with_audio=True),
    ]

    supported_platforms = ['android']

    def setup(self, context):  # NOQA
        self.bbench_on_device = '/'.join(
            [self.device.working_directory, 'bbench'])
        self.bbench_server_on_device = os.path.join(
            self.device.working_directory, BBENCH_SERVER_NAME)
        self.audio_on_device = os.path.join(self.device.working_directory,
                                            DEFAULT_AUDIO_FILE_NAME)
        self.index_noinput = 'file:///{}'.format(
            self.bbench_on_device) + '/index_noinput.html'

        if not os.path.isdir(os.path.join(self.dependencies_directory,
                                          "sites")):
            self._download_bbench_file()
        if self.with_audio and not os.path.isfile(self.audio_file):
            self._download_audio_file()

        if not os.path.isdir(self.dependencies_directory):
            raise ConfigError('Bbench directory does not exist: {}'.format(
                self.dependencies_directory))
        self._apply_patches()

        if self.with_audio:
            if self.force_dependency_push or not self.device.file_exists(
                    self.audio_on_device):
                self.device.push_file(self.audio_file,
                                      self.audio_on_device,
                                      timeout=120)

        # Push the bbench site pages and http server to target device
        if self.force_dependency_push or not self.device.file_exists(
                self.bbench_on_device):
            self.logger.debug('Copying bbench sites to device.')
            self.device.push_file(self.dependencies_directory,
                                  self.bbench_on_device,
                                  timeout=300)

        # Push the bbench server
        host_binary = context.resolver.get(
            Executable(self, self.device.abi, 'bbench_server'))
        device_binary = self.device.install(host_binary)
        self.launch_server_command = '{} {}'.format(device_binary,
                                                    self.server_timeout)

        # Open the browser with default page
        self.device.execute('am start -n  {}/{} about:blank'.format(
            self.browser_package, self.browser_activity))
        time.sleep(5)

        # Stop the browser if already running and wait for it to stop
        self.device.execute('am force-stop {}'.format(self.browser_package))
        time.sleep(5)

        # Clear the logs
        self.device.clear_logcat()

        # clear browser cache
        self.device.execute('pm clear {}'.format(self.browser_package))
        if self.clear_file_cache:
            self.device.execute('sync')
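            # Writing 3 to drop_caches frees the page cache as well as dentries
            # and inodes, so the run starts with a cold file cache.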
            self.device.set_sysfile_value('/proc/sys/vm/drop_caches', 3)

        # On Android 6+ the web browser requires permissions to access the SD card
        if self.device.get_sdk_version() >= 23:
            self.device.execute(
                "pm grant {} android.permission.READ_EXTERNAL_STORAGE".format(
                    self.browser_package))
            self.device.execute(
                "pm grant {} android.permission.WRITE_EXTERNAL_STORAGE".format(
                    self.browser_package))

        # Launch the background music
        if self.with_audio:
            self.device.execute(
                'am start -W -S -n com.android.music/.MediaPlaybackActivity -d {}'
                .format(self.audio_on_device))

    def run(self, context):
        # Launch the bbench
        self.device.execute('am start -n  {}/{} {}'.format(
            self.browser_package, self.browser_activity, self.index_noinput))
        time.sleep(5)  # WA1 parity
        # Launch the server and wait for BBench to complete
        self.device.execute(self.launch_server_command, self.server_timeout)

    def update_result(self, context):
        # Stop the browser
        self.device.execute('am force-stop {}'.format(self.browser_package))

        # Stop the music
        if self.with_audio:
            self.device.execute('am force-stop com.android.music')

        # Get index_no_input.html
        indexfile = os.path.join(self.device.working_directory,
                                 'bbench/index_noinput.html')
        self.device.pull_file(indexfile, context.output_directory)

        # Get the logs
        output_file = os.path.join(self.device.working_directory,
                                   'browser_bbench_logcat.txt')
        self.device.execute('logcat -v time -d > {}'.format(output_file))
        self.device.pull_file(output_file, context.output_directory)

        metrics = _parse_metrics(
            os.path.join(context.output_directory,
                         'browser_bbench_logcat.txt'),
            os.path.join(context.output_directory, 'index_noinput.html'),
            context.output_directory)
        if not metrics:
            raise WorkloadError('No BBench metrics extracted from Logcat')

        for key, values in metrics:
            for i, value in enumerate(values):
                metric = '{}_{}'.format(key, i) if i else key
                context.result.add_metric(metric,
                                          value,
                                          units='ms',
                                          lower_is_better=True)

    def teardown(self, context):
        if self.perform_cleanup:
            self.device.execute('rm -r {}'.format(self.bbench_on_device))
            self.device.execute('rm {}'.format(self.audio_on_device))

    def _download_audio_file(self):
        self.logger.debug('Downloading audio file.')
        urllib.urlretrieve(DEFAULT_AUDIO_FILE, self.audio_file)

    def _download_bbench_file(self):
        # downloading the file to bbench_dir
        self.logger.debug('Downloading bbench dependencies.')
        full_file_path = os.path.join(self.dependencies_directory,
                                      DOWNLOADED_FILE_NAME)
        urllib.urlretrieve(DEFAULT_BBENCH_FILE, full_file_path)

        # Extracting Bbench to bbench_dir/
        self.logger.debug('Extracting bbench dependencies.')
        tar = tarfile.open(full_file_path)
        tar.extractall(os.path.dirname(self.dependencies_directory))

        # Remove unneeded files and the compressed archive
        os.remove(full_file_path)
        youtube_dir = os.path.join(self.dependencies_directory, 'sites',
                                   'youtube')
        os.remove(os.path.join(youtube_dir, 'www.youtube.com', 'kp.flv'))
        os.remove(os.path.join(youtube_dir, 'kp.flv'))

    def _apply_patches(self):
        self.logger.debug('Applying patches.')
        shutil.copy(os.path.join(PATCH_FILES, "bbench.js"),
                    self.dependencies_directory)
        shutil.copy(os.path.join(PATCH_FILES, "results.html"),
                    self.dependencies_directory)
        shutil.copy(os.path.join(PATCH_FILES, "index_noinput.html"),
                    self.dependencies_directory)
        shutil.copy(
            os.path.join(PATCH_FILES, "bbc.html"),
            os.path.join(self.dependencies_directory, "sites", "bbc",
                         "www.bbc.co.uk", "index.html"))
        shutil.copy(
            os.path.join(PATCH_FILES, "cnn.html"),
            os.path.join(self.dependencies_directory, "sites", "cnn",
                         "www.cnn.com", "index.html"))
        shutil.copy(
            os.path.join(PATCH_FILES, "twitter.html"),
            os.path.join(self.dependencies_directory, "sites", "twitter",
                         "twitter.com", "index.html"))
Example no. 4
class Andebench(AndroidUiAutoBenchmark):

    name = 'andebench'
    description = """
    AndEBench is an industry standard Android benchmark provided by The
    Embedded Microprocessor Benchmark Consortium (EEMBC).

    http://www.eembc.org/andebench/about.php

    From the website:

       - Initial focus on CPU and Dalvik interpreter performance
       - Internal algorithms concentrate on integer operations
       - Compares the difference between native and Java performance
       - Implements flexible multicore performance analysis
       - Results displayed in Iterations per second
       - Detailed log file for comprehensive engineering analysis

    """
    package = 'com.eembc.coremark'
    activity = 'com.eembc.coremark.splash'
    summary_metrics = ['AndEMark Java', 'AndEMark Native']

    parameters = [
        Parameter('number_of_threads', kind=int,
                  description='Number of threads that will be spawned by AndEBench.'),
        Parameter('single_threaded', kind=bool,
                  description="""
                  If ``true``, AndEBench will run with a single thread. Note: this must
                  not be specified if ``number_of_threads`` has been specified.
                  """),
        Parameter('native_only', kind=bool,
                  description="""
                  If ``true``, AndEBench will execute only the native portion of the benchmark.
                  """),
    ]

    aliases = [
        Alias('andebenchst', number_of_threads=1),
    ]

    regex = re.compile(r'\s*(?P<key>(AndEMark Native|AndEMark Java))\s*:'
                       r'\s*(?P<value>\d+)')

    def validate(self):
        if (self.number_of_threads is not None) and (self.single_threaded is not None):  # pylint: disable=E1101
            raise ConfigError('Can\'t specify both number_of_threads and single_threaded parameters.')

    def setup(self, context):
        if self.number_of_threads is None:  # pylint: disable=access-member-before-definition
            if self.single_threaded:  # pylint: disable=E1101
                self.number_of_threads = 1  # pylint: disable=attribute-defined-outside-init
            else:
                self.number_of_threads = self.device.number_of_cores  # pylint: disable=W0201
        self.logger.debug('Using {} threads'.format(self.number_of_threads))
        self.uiauto_params['number_of_threads'] = self.number_of_threads
        self.uiauto_params['native_only'] = False
        if self.native_only:
            self.uiauto_params['native_only'] = True
        # Call the superclass setup last, as it relies on uiauto_params set above
        super(Andebench, self).setup(context)

    def update_result(self, context):
        super(Andebench, self).update_result(context)
        results = {}
        with open(self.logcat_log) as fh:
            for line in fh:
                match = self.regex.search(line)
                if match:
                    data = match.groupdict()
                    results[data['key']] = data['value']
        for key, value in results.iteritems():
            context.result.add_metric(key, value)
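
The regex above pulls the two summary scores out of logcat. A minimal sketch of what
Andebench.update_result extracts; the sample logcat lines (tag, pid and scores) are invented:

import re

regex = re.compile(r'\s*(?P<key>(AndEMark Native|AndEMark Java))\s*:'
                   r'\s*(?P<value>\d+)')
sample = ['I/CoreMark ( 1234): AndEMark Native : 9500',
          'I/CoreMark ( 1234): AndEMark Java : 2100']
for line in sample:
    match = regex.search(line)
    if match:
        print('{key} = {value}'.format(**match.groupdict()))
# AndEMark Native = 9500
# AndEMark Java = 2100
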
class Glb(AndroidUiAutoBenchmark):

    name = 'glbenchmark'
    description = """
    Measures the graphics performance of Android devices by testing
    the underlying OpenGL (ES) implementation.

    http://gfxbench.com/about-gfxbench.jsp

    From the website:

        The benchmark includes console-quality high-level 3D animations
        (T-Rex HD and Egypt HD) and low-level graphics measurements.

        With high vertex count and complex effects such as motion blur, parallax
        mapping and particle systems, the engine of GFXBench stresses GPUs in order
        to provide users with realistic feedback on their device.

    """
    activity = 'com.glbenchmark.activities.GLBenchmarkDownloaderActivity'
    view = 'com.glbenchmark.glbenchmark27/com.glbenchmark.activities.GLBRender'

    packages = {
        '2.7': 'com.glbenchmark.glbenchmark27',
        '2.5': 'com.glbenchmark.glbenchmark25',
    }
    # If usecase is not specified the default usecase is the first supported usecase alias
    # for the specified version.
    supported_usecase_aliases = {
        '2.7': ['t-rex', 'egypt'],
        '2.5': ['egypt-classic', 'egypt'],
    }

    default_iterations = 1
    install_timeout = 500

    regex = re.compile(r'GLBenchmark (metric|FPS): (.*)')

    parameters = [
        Parameter('version', default='2.7', allowed_values=['2.7', '2.5'],
                  description=('Specifies which version of the benchmark to run (different versions '
                               'support different use cases).')),
        Parameter('use_case', default=None,
                  description="""Specifies which usecase to run, as listed in the benchmark menu; e.g.
                                 ``'GLBenchmark 2.5 Egypt HD'``. For convenience, two aliases are provided
                                 for the most common use cases: ``'egypt'`` and ``'t-rex'``. These can
                                 be used instead of the full use case title. For version ``'2.7'`` it defaults
                                 to ``'t-rex'``, for version ``'2.5'`` it defaults to ``'egypt-classic'``.
                  """),
        Parameter('variant', default='onscreen',
                  description="""Specifies which variant of the use case to run, as listed in the benchmarks
                                 menu (small text underneath the use case name); e.g. ``'C24Z16 Onscreen Auto'``.
                                 For convenience, two aliases are provided for the most common variants:
                                 ``'onscreen'`` and ``'offscreen'``. These may be used instead of full variant
                                 names.
                  """),
        Parameter('times', kind=int, default=1,
                  description=('Specifies the number of times the benchmark will be run in a "tight '
                               'loop", i.e. without performing setup/teardown in between.')),
        Parameter('timeout', kind=int, default=200,
                  description="""Specifies how long, in seconds, UI automation will wait for results screen to
                                 appear before assuming something went wrong.
                  """),
    ]

    aliases = [
        Alias('glbench'),
        Alias('egypt', use_case='egypt'),
        Alias('t-rex', use_case='t-rex'),
        Alias('egypt_onscreen', use_case='egypt', variant='onscreen'),
        Alias('t-rex_onscreen', use_case='t-rex', variant='onscreen'),
        Alias('egypt_offscreen', use_case='egypt', variant='offscreen'),
        Alias('t-rex_offscreen', use_case='t-rex', variant='offscreen'),
    ]

    def __init__(self, device, **kwargs):
        super(Glb, self).__init__(device, **kwargs)
        self.uiauto_params['version'] = self.version

        if self.use_case is None:
            self.use_case = self.supported_usecase_aliases[self.version][0]
        if self.use_case.lower() in USE_CASE_MAP:
            if self.use_case not in self.supported_usecase_aliases[self.version]:
                raise ConfigError('use case {} is not supported in version {}'.format(self.use_case, self.version))
            self.use_case = USE_CASE_MAP[self.use_case.lower()]
        self.uiauto_params['use_case'] = self.use_case.replace(' ', '_')

        if self.variant.lower() in VARIANT_MAP:
            self.variant = VARIANT_MAP[self.variant.lower()]
        self.uiauto_params['variant'] = self.variant.replace(' ', '_')

        self.uiauto_params['iterations'] = self.times
        self.run_timeout = 4 * 60 * self.times

        self.uiauto_params['timeout'] = self.timeout
        self.package = self.packages[self.version]

    def init_resources(self, context):
        self.apk_file = context.resolver.get(wlauto.common.android.resources.ApkFile(self), version=self.version)
        self.uiauto_file = context.resolver.get(wlauto.common.android.resources.JarFile(self))
        self.device_uiauto_file = self.device.path.join(self.device.working_directory,
                                                        os.path.basename(self.uiauto_file))
        if not self.uiauto_package:
            self.uiauto_package = os.path.splitext(os.path.basename(self.uiauto_file))[0]

    def update_result(self, context):
        super(Glb, self).update_result(context)
        match_count = 0
        with open(self.logcat_log) as fh:
            for line in fh:
                match = self.regex.search(line)
                if match:
                    metric = match.group(1)
                    value, units = match.group(2).split()
                    value = value.replace('*', '')
                    if metric == 'metric':
                        metric = 'Frames'
                        units = 'frames'
                    metric = metric + '_' + str(match_count // 2)
                    context.result.add_metric(metric, value, units)
                    match_count += 1
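
Each GLBenchmark iteration emits a frame-count ('metric') line followed by an FPS line, and
match_count // 2 keeps the pair under the same index. A minimal sketch of that pairing; the
sample logcat lines and values are invented:

import re

regex = re.compile(r'GLBenchmark (metric|FPS): (.*)')
sample = ['GLBenchmark metric: 3251 frames',
          'GLBenchmark FPS: 57.9 fps']
match_count = 0
for line in sample:
    match = regex.search(line)
    if match:
        metric = match.group(1)
        value, units = match.group(2).split()
        if metric == 'metric':
            metric, units = 'Frames', 'frames'
        print('{}_{} = {} {}'.format(metric, match_count // 2, value, units))
        match_count += 1
# Frames_0 = 3251 frames
# FPS_0 = 57.9 fps
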
Example no. 6
class VideoWorkload(Workload):
    name = 'video'
    description = """
    Plays a video file using the standard android video player for a predetermined duration.

    The video can be specified either using ``resolution`` workload parameter, in which case
    `Big Buck Bunny`_ MP4 video of that resolution will be downloaded and used, or using
    ``filename`` parameter, in which case the video file specified will be used.


    .. _Big Buck Bunny: http://www.bigbuckbunny.org/

    """
    supported_platforms = ['android']

    parameters = [
        Parameter(
            'play_duration',
            kind=int,
            default=20,
            description=
            'Playback duration of the video file. This becomes the duration of the workload.'
        ),
        Parameter(
            'resolution',
            default='720p',
            allowed_values=['480p', '720p', '1080p'],
            description='Specifies which resolution video file to play.'),
        Parameter('filename',
                  description="""
                   The name of the video file to play. This can be either a path
                   to the file anywhere on your file system, or it could be just a
                   name, in which case, the workload will look for it in
                   ``~/.workloads_automation/dependency/video``
                   *Note*: either resolution or filename should be specified, but not both!
                  """),
        Parameter('force_dependency_push',
                  kind=boolean,
                  default=False,
                  description="""
                  If true, video will always be pushed to device, regardless
                  of whether the file is already on the device.  Default is ``False``.
                  """),
    ]

    aliases = [
        Alias('video_720p', resolution='720p'),
        Alias('video_1080p', resolution='1080p'),
    ]

    @property
    def host_video_file(self):
        if not self._selected_file:
            if self.filename:
                if self.filename[0] in './' or len(
                        self.filename) > 1 and self.filename[1] == ':':
                    filepath = os.path.abspath(self.filename)
                else:
                    filepath = os.path.join(self.video_directory,
                                            self.filename)
                if not os.path.isfile(filepath):
                    raise WorkloadError('{} does not exist.'.format(filepath))
                self._selected_file = filepath
            else:
                files = self.video_files[self.resolution]
                if not files:
                    url = DOWNLOAD_URLS[self.resolution]
                    filepath = os.path.join(self.video_directory,
                                            os.path.basename(url))
                    self.logger.debug('Downloading {}...'.format(filepath))
                    urllib.urlretrieve(url, filepath)
                    self._selected_file = filepath
                else:
                    self._selected_file = files[0]
                    if len(files) > 1:
                        self.logger.warn(
                            'Multiple files for {} found. Using {}.'.format(
                                self.resolution, self._selected_file))
                        self.logger.warn(
                            'Use the \'filename\' parameter instead of \'resolution\' to specify a different file.'
                        )
        return self._selected_file

    def init_resources(self, context):
        self.video_directory = _d(
            os.path.join(settings.dependencies_directory, 'video'))
        self.video_files = defaultdict(list)
        self.enum_video_files()
        self._selected_file = None

    def setup(self, context):
        on_device_video_file = os.path.join(
            self.device.working_directory,
            os.path.basename(self.host_video_file))
        if self.force_dependency_push or not self.device.file_exists(
                on_device_video_file):
            self.logger.debug('Copying {} to device.'.format(
                self.host_video_file))
            self.device.push_file(self.host_video_file,
                                  on_device_video_file,
                                  timeout=120)
        self.device.execute(
            'am start -n  com.android.browser/.BrowserActivity about:blank')
        time.sleep(5)
        self.device.execute('am force-stop com.android.browser')
        time.sleep(5)
        self.device.clear_logcat()
        command = 'am start -W -S -n com.android.gallery3d/.app.MovieActivity -d {}'.format(
            on_device_video_file)
        self.device.execute(command)

    def run(self, context):
        time.sleep(self.play_duration)

    def update_result(self, context):
        self.device.execute('am force-stop com.android.gallery3d')

    def teardown(self, context):
        pass

    def validate(self):
        if (self.resolution and self.filename) and (
                self.resolution != self.parameters['resolution'].default):
            raise ConfigError(
                'Either resolution *or* filename should be specified, but not both.'
            )

    def enum_video_files(self):
        for filename in os.listdir(self.video_directory):
            for resolution in self.parameters['resolution'].allowed_values:
                if resolution in filename:
                    self.video_files[resolution].append(
                        os.path.join(self.video_directory, filename))
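
The first branch of host_video_file treats ``filename`` as a path when it starts with '.' or '/',
or when its second character is ':' (a Windows-style drive letter); otherwise the name is looked
up in the video dependencies directory. A minimal sketch of that heuristic; the sample names are
invented:

def looks_like_path(filename):
    return filename[0] in './' or len(filename) > 1 and filename[1] == ':'

for name in ['/videos/clip.mp4', './clip.mp4', 'C:\\videos\\clip.mp4', 'clip_720p.mp4']:
    print('{}: {}'.format(name, looks_like_path(name)))
# /videos/clip.mp4: True
# ./clip.mp4: True
# C:\videos\clip.mp4: True
# clip_720p.mp4: False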