Results

This module contains everything involved in parsing and evaluating the results of test runs. This includes the base for the ‘result parser’ plugins themselves, as well as functions for performing this parsing. Additionally, it contains the functions used to get the base result values, as well as resolving result evaluations.

pavilion.result.check_config(parser_conf, evaluate_conf)

Make sure the result config is sensible, both for result parsers and evaluations.

For result parsers we check for:

  • Duplicated key names.

  • Reserved key names.

  • Bad parser plugin arguments.

For evaluations we check for:

  • Reserved key names.

  • Invalid expression syntax.

Raises

ResultError – When a config breaks the rules.
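The duplicate-key and reserved-key checks can be sketched as below. This is an illustrative standalone sketch, not Pavilion's implementation: the config shape (parser name mapped to a list of per-key configs) and the reserved-key subset are assumptions.

```python
# Illustrative sketch of check_config()'s duplicate/reserved key checks.
# The config layout and the reserved-key names here are assumptions.

class ResultError(Exception):
    """Raised when a result config breaks the rules."""

RESERVED_KEYS = {'started', 'finished', 'duration', 'return_value'}  # example subset

def check_parser_keys(parser_conf):
    """parser_conf: {parser_name: [{'key': <result key>, ...}, ...]}"""
    seen = set()
    for parser_name, configs in parser_conf.items():
        for conf in configs:
            key = conf['key']
            if key in RESERVED_KEYS:
                raise ResultError(
                    "Result parser '{}' uses reserved key '{}'."
                    .format(parser_name, key))
            if key in seen:
                raise ResultError(
                    "Duplicate result key '{}' under parser '{}'."
                    .format(key, parser_name))
            seen.add(key)
```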

pavilion.result.prune_result_log(log_path: Path, ids: List[str]) List[dict]

Remove records corresponding to the given test ids. Each id can be either a test run id or a test run uuid.

Parameters
  • log_path – The result log path.

  • ids – A list of test run ids and/or uuids.

Returns

A list of the pruned result dictionaries.

Raises

ResultError – When we can’t overwrite the log file.
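The pruning behavior can be sketched as follows, assuming (as an illustration only) that the result log is a JSON-lines file whose records carry `id` and `uuid` fields:

```python
# Minimal sketch of result-log pruning. The JSON-lines layout and the
# 'id'/'uuid' record fields are assumptions for illustration.
import json
from pathlib import Path
from typing import List

def prune_result_log(log_path: Path, ids: List[str]) -> List[dict]:
    """Rewrite the log without records matching any given id or uuid,
    and return the pruned records."""
    kept, pruned = [], []
    with log_path.open() as log_file:
        for line in log_file:
            record = json.loads(line)
            if str(record.get('id')) in ids or record.get('uuid') in ids:
                pruned.append(record)
            else:
                kept.append(line)
    # The real function must handle overwrite failures (hence ResultError).
    log_path.write_text(''.join(kept))
    return pruned
```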

pavilion.result.remove_temp_results(results: dict, log: IndentedLog) None

Remove all result keys that start with an underscore.
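A sketch of that behavior, with the `IndentedLog` argument omitted and recursion into nested dicts assumed for illustration:

```python
# Sketch of stripping temporary result keys: any key beginning with an
# underscore is deleted in place. Recursing into nested dicts is an
# assumption here; the real function also logs removals via IndentedLog.
def remove_temp_results(results: dict) -> None:
    for key in list(results):
        if key.startswith('_'):
            del results[key]
        elif isinstance(results[key], dict):
            remove_temp_results(results[key])
```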

Base Results

Handles getting the default, base results.

pavilion.result.base.BASE_RESULTS = {'created': (<function <lambda>>, 'When the test was created.'), 'duration': (<function <lambda>>, 'Duration of the test run (finished - started) in seconds.'), 'finished': (<function <lambda>>, 'When the test run finished.'), 'id': (<function <lambda>>, 'The test run id'), 'job_info': (<function <lambda>>, "The scheduler plugin's job info for the test."), 'name': (<function <lambda>>, 'The test run name'), 'pav_result_errors': (<function <lambda>>, 'Errors from processing results.'), 'pav_version': (<function <lambda>>, 'The version of Pavilion used to run this test.'), 'per_file': (<function <lambda>>, 'Per file results.'), 'permute_on': (<function <lambda>>, 'The permutation variables and values.'), 'return_value': (None, 'The return value of run.sh'), 'sched': (<function <lambda>>, 'Most of the scheduler variables.'), 'started': (<function <lambda>>, 'When the test run itself started.'), 'sys_name': (<function <lambda>>, "The system name '{{sys.sys_name}}'"), 'test_version': (<function <lambda>>, 'The test config version.'), 'user': (<function <lambda>>, 'The user that started the test.'), 'uuid': (<function <lambda>>, "The test's fully unique identifier."), 'var': (<function <lambda>>, "The test's variables.")}

A dictionary mapping result key names to a tuple of the function that acquires the value and a documentation string. The function should take a test_run object as its only argument. A function of None denotes that the key is reserved but filled in elsewhere.

pavilion.result.base.base_results(test) dict

Get all of the auto-filled result values for a test.

Parameters

test (pavilion.test_run.PavTestRun) – A pavilion test object.

Returns

A dictionary of result values.
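The (function, docstring) layout of BASE_RESULTS lends itself to a simple loop. This sketch uses a toy subset of the keys and a stand-in test object; it mirrors the pattern rather than Pavilion's actual code:

```python
# Sketch of the BASE_RESULTS pattern: each key maps to (getter, doc).
# A getter of None marks a reserved key filled in elsewhere.
BASE_RESULTS = {
    'id': (lambda test: test.id, 'The test run id'),
    'name': (lambda test: test.name, 'The test run name'),
    'return_value': (None, 'The return value of run.sh'),
}

def base_results(test) -> dict:
    """Collect every auto-filled base result value for the test."""
    return {key: getter(test)
            for key, (getter, _doc) in BASE_RESULTS.items()
            if getter is not None}

class FakeTest:
    """Stand-in for a pavilion test run object."""
    id = 42
    name = 'demo.test'
```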

pavilion.result.base.get_top_keys(test, topkey: str) dict

Return the topkey (e.g. sched, var) nested dict from the test object variable manager. Keys whose names end in ‘list’ will always be lists; otherwise they’ll be single items. Keys in DISABLE_SCHED_KEYS won’t be added.

Parameters
  • test – Test object.

  • topkey – Key with dict type value in test object.
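The ‘list’-suffix rule can be sketched as follows; the variable manager is represented here as a plain nested dict, and the DISABLE_SCHED_KEYS contents are hypothetical:

```python
# Sketch of the 'list' suffix rule: keys ending in 'list' keep all of
# their values, other keys collapse to their first value. The disabled
# key name below is hypothetical.
DISABLE_SCHED_KEYS = {'test_cmd'}

def get_top_keys(var_dict: dict, topkey: str) -> dict:
    result = {}
    for key, values in var_dict[topkey].items():
        if key in DISABLE_SCHED_KEYS:
            continue
        result[key] = values if key.endswith('list') else values[0]
    return result
```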

Result Parsers

Built in Result Parser plugins live here. While they’re plugins, they’re added manually for speed.

class pavilion.result_parsers.ResultParser(name, description, defaults=None, config_elems=None, validators=None, priority=10)

Bases: IPlugin

Base class for creating a result parser plugin. These are essentially a callable that implements operations on the test or test files. The arguments for the callable are provided automatically via the test config. The doc string of result parser classes is used as the user help text for that class, along with the help from the config items.

The parser itself should be implemented by overriding the __call__ method. It should take a file argument and kwargs to match the config arguments.

FORCE_DEFAULTS = []

Let the user know they can’t set these config keys for this result parser, effectively forcing the value to the default.

GLOBAL_CONFIG_ELEMS = [<yaml_config StrElem action>, <yaml_config ListElem files>, <yaml_config StrElem per_file>, <yaml_config StrElem for_lines_matching>, <yaml_config ListElem preceded_by>, <yaml_config StrElem match_select>]
PRIO_COMMON = 10
PRIO_CORE = 0
PRIO_USER = 20
_DEFAULTS = {'action': 'store', 'files': ['../run.log'], 'for_lines_matching': '', 'match_select': 'first', 'per_file': 'first', 'preceded_by': []}

Defaults for the common parser arguments. This is not meant to be changed by subclasses.

__call__(*args, **kwargs)

Override with the result parser function.
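A minimal parser following this pattern might look like the sketch below. Since the real ResultParser base isn't importable here, a stub stands in for it, and the ‘token’ parser and its argument are purely illustrative:

```python
# Standalone sketch of the result parser pattern. The stub base class
# stands in for pavilion.result_parsers.ResultParser; the 'token'
# parser and its config argument are hypothetical examples.
class ResultParser:
    def __init__(self, name, description, **kwargs):
        self.name = name
        self.description = description

class TokenParser(ResultParser):
    """Search the file for a literal token."""

    def __init__(self):
        super().__init__(
            name='token',
            description='Search the file for a literal token.')

    def __call__(self, file, token='PASSED'):
        # 'file' is an open file object; keyword args mirror the
        # parser's config items.
        for line in file:
            if token in line:
                return True
        return None  # No match: an empty result.
```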

__init__(name, description, defaults=None, config_elems=None, validators=None, priority=10)

Initialize the plugin object

Parameters
  • name (str) – The name of this plugin.

  • description (str) – A short description of this result parser.

  • config_elems (List[yaml_config.ConfigElement]) – A list of configuration elements (from the yaml_config library) to use to define the config section for this result parser. These will be passed as arguments to the parser function. Only StrElem and ListElems are accepted. Any type conversions should be done with validators. Use the defaults and validators argument to set defaults, rather than the YamlConfig options.

  • defaults (dict) – A dictionary of defaults for the result parser’s arguments.

  • validators (dict) –

    A dictionary of auto-validators. These can take several forms:

    • A tuple - The value must be one of the items in the tuple.

    • A function - The function should accept a single argument and return the converted value. ValueError or ResultError should be raised if there are issues. Typically this will be a type conversion function. For list arguments it is applied to each of the list values.

  • priority (int) – The priority of this plugin, compared to plugins of the same name. Higher priority plugins will supersede others.
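The two validator forms described above could be applied roughly like this (a sketch, not Pavilion's actual validation code):

```python
# Sketch of applying an auto-validator: a tuple restricts the value to
# its members, while a function converts the value (element-wise for
# list arguments).
def apply_validator(validator, value):
    if isinstance(validator, tuple):
        if value not in validator:
            raise ValueError(
                "'{}' must be one of {}.".format(value, validator))
        return value
    if isinstance(value, list):
        return [validator(item) for item in value]
    return validator(value)
```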

__module__ = 'pavilion.result_parsers.base_classes'
_check_args(**kwargs) dict

Override this to add custom checking of the arguments at test kickoff time. This prevents errors in your arguments from causing a problem in the middle of a test run. The yaml_config module handles structural checking (and can handle more). This should raise a descriptive ResultParserError if any issues are found.

Parameters

kwargs – Child result parsers should override these with specific kwargs for their arguments. They should all default to None and rely on the config parser to set their defaults.

Raises

ResultParserError – If there are bad arguments.

activate()

Yapsy runs this when adding the plugin.

In this case it:

  • Adds the config section (from get_config_items()) to the test config format.

  • Adds the result parser to the list of known result parsers.

check_args(**kwargs) dict

Check the arguments for any errors at test kickoff time, if they don’t contain deferred variables. We can’t check tests with deferred args. On error, should raise a ResultParserError.

Parameters

kwargs (dict) – The arguments from the config.

Raises

ResultError – When bad arguments are given.

check_config(rconf: dict, keys: List[str]) None

Validate the parser configuration.

Parameters
  • rconf – The results parser configuration.

  • keys – The keys (generally one) under which the parsed results will be stored.

Raises

ResultError – On failure.

deactivate()

Yapsy calls this to remove this plugin. We only ever do this in unittests.

doc()

Return documentation on this result parser.

get_config_items()

Get the config for this result parser. This should be a list of yaml_config.ConfigElement instances that will be added to the test config format at plugin activation time. The simplest format is a list of yaml_config.StrElem objects, but any structure is allowed as long as the leaf elements are StrElem type.

The config values will be passed as the keyword arguments to the result parser when it’s run and when its arguments are checked. The base implementation provides several arguments that must be present for every result parser. See the implementation of this method in result_parser.py for more info on those arguments and what they do.

Example:

config_items = super().get_config_items()
config_items.append(
    yaml_config.StrElem('token', default='PASSED',
                        help="The token to search for in the file.")
)
return config_items
property path

The path to the file containing this result parser plugin.

register_core_plugins()

Add all builtin plugins and activate them.

set_parser_defaults(rconf: dict, def_conf: dict)

Set the default values for each result parser. The default conf can hold defaults that apply across an entire result parser.

pavilion.result_parsers.register_core_plugins()

Add all builtin plugins and activate them.

Result Evaluation

Handles performing evaluations on results.

pavilion.result.evaluations.check_evaluations(evaluations: Dict[str, str])

Check all evaluations for basic errors.

Raises

ResultError – For detected problems.

pavilion.result.evaluations.evaluate_results(results: dict, evaluations: Dict[str, str], base_log: Optional[IndentedLog] = None)

Perform result evaluations using an expression parser. The variables in such expressions are pulled from the results data structure, and the results are stored there as well.

Parameters
  • results – The result dict. Will be modified in place.

  • evaluations – A dictionary of evaluations to perform.

  • base_log – The optional logger function (from result.get_result_logger).

pavilion.result.evaluations.parse_evaluation_dict(eval_dict: Dict[str, str], results: dict, log: IndentedLog) None

Parse the dictionary of evaluation expressions, given that some of them may contain references to each other. Each evaluated value will be stored under its corresponding key in the results dict.

Raises
  • StringParserError – When there’s an error parsing or resolving one of the expressions. The error will already contain key information.

  • ValueError – When there’s a reference loop.
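The cross-referencing behavior can be sketched with a recursive resolver. Python's eval() stands in for Pavilion's expression parser, dependency detection is deliberately crude (a name-in-string check), and the cycle check raises ValueError as described:

```python
# Sketch of parse_evaluation_dict(): evaluations may reference each
# other, so referenced keys are resolved first, and a reference loop
# raises ValueError. eval() is a stand-in for the expression parser.
def parse_evaluation_dict(eval_dict, results):
    in_progress = set()

    def resolve(key):
        if key in results:
            return results[key]
        if key in in_progress:
            raise ValueError("Reference loop involving '{}'.".format(key))
        in_progress.add(key)
        namespace = dict(results)
        # Crude dependency detection: resolve any evaluation key whose
        # name appears in this expression first.
        for other in eval_dict:
            if other != key and other in eval_dict[key]:
                namespace[other] = resolve(other)
        results[key] = eval(eval_dict[key], {}, namespace)
        in_progress.discard(key)
        return results[key]

    for key in eval_dict:
        resolve(key)
```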