Project Definition

Most of the project definition happens in a project folder in the checked-out copy of the ResStock or ComStock repository. However, for this library to work, a separate project YAML file provides the details needed for the batch run. An example file, project_resstock_national.yml, is included in this repo and shown below.

schema_version: '0.3'
buildstock_directory: ../resstock  # Relative to this file or absolute
project_directory: project_national  # Relative to buildstock_directory
output_directory: /scratch/nmerket/national_test_outputs2
# weather_files_url: https://data.nrel.gov/system/files/156/BuildStock_TMY3_FIPS.zip
weather_files_path: /shared-projects/buildstock/weather/BuildStock_TMY3_FIPS.zip  # Relative to this file or absolute path to zipped weather files

sampler:
  type: residential_quota_downselect
  args:
    n_datapoints: 350000
    logic:
      - Geometry Building Type RECS|Single-Family Detached
      - Vacancy Status|Occupied
    resample: false

workflow_generator:
  type: residential_default
  args:
    timeseries_csv_export:
      reporting_frequency: Hourly
      include_enduse_subcategories: true

baseline:
  n_buildings_represented: 133172057  # Total number of residential dwelling units in contiguous United States, including unoccupied units, resulting from a census tract level query of ACS 5-yr 2016 (i.e. 2012-2016), using this script: https://github.com/NREL/resstock-estimation/blob/master/sources/spatial/tsv_maker.py.

upgrades:
  - upgrade_name: Triple-Pane Windows
    options:
      - option: Windows|Low-E, Triple, Non-metal, Air, L-Gain
#        apply_logic:
        costs:
          - value: 45.77
            multiplier: Window Area (ft^2)
        lifetime: 30

eagle:
  n_jobs: 4
  minutes_per_sim: 4
  account: enduse
  sampling:
    time: 5
  postprocessing:
    time: 10
    n_workers: 1

aws:
  # The job_identifier should be unique, start with alpha, not include dashes, and limited to 10 chars or data loss can occur
  job_identifier: test_proj
  s3:
    bucket: resbldg-datasets
    prefix: testing/user_test
  emr:
    worker_instance_count: 1
  region: us-west-2
  use_spot: true
  batch_array_size: 10
  # To receive email updates on job progress accept the request to receive emails that will be sent from Amazon
  notifications_email: user@nrel.gov

postprocessing:
  aws:
    region_name: 'us-west-2'
    s3:
      bucket: resbldg-datasets
      prefix: resstock-athena/calibration_runs_new
    athena:
      glue_service_role: service-role/AWSGlueServiceRole-default
      database_name: testing
      max_crawling_time: 300 #time to wait for the crawler to complete before aborting it

The following sections describe each part of the file and what it does.

Reference the project

First, we specify which project to run using the following keys:

  • buildstock_directory: The absolute (or relative to this YAML file) path of the ResStock repository.

  • project_directory: The relative (to the buildstock_directory) path of the project.

  • schema_version: The version of the project YAML schema to validate against; currently the minimum version is 0.3.

Weather Files

Each batch of simulations depends on a set of weather files, which are provided in a zip file. Point to them with one of the following keys (a short sketch follows the list):

  • weather_files_url: Where the zip file of weather files can be downloaded from.

  • weather_files_path: Where on this machine to find the zipped weather files. This can be absolute or relative (to this file).
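
For example, a minimal sketch using these keys (only one of the two is needed; the local path shown is a placeholder, and the URL is the one from the example above):

weather_files_path: /path/to/BuildStock_TMY3_FIPS.zip
# or, to download the files instead:
# weather_files_url: https://data.nrel.gov/system/files/156/BuildStock_TMY3_FIPS.zip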

Weather files for Typical Meteorological Years (TMYs) can be obtained from the NREL data catalog.

Historical weather data for Actual Meteorological Years (AMYs) can be purchased in EPW format from various private companies. NREL users of buildstockbatch can use NREL-owned AMY datasets by setting weather_files_url to a zip file located on Box.

Custom Weather Files

To use your own custom weather files for a specific location, do one of the following:

  • Rename the filename references in your local options_lookup.tsv (in the resources folder) to match your custom weather file names. For example, in options_lookup.tsv the Location AL_Birmingham.Muni.AP.722280 is matched to weather_file_name=USA_AL_Birmingham.Muni.AP.722280.epw. To use a different weather file for this location, update the weather_file_name field to match the name of your custom file.

  • Rename your custom .epw weather file to match the references in your local options_lookup.tsv in the resources folder.
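
For illustration, an abbreviated options_lookup.tsv row for this location looks roughly like the sketch below (the measure columns are omitted; check your own file for the exact layout). To switch this location to a hypothetical MyCustomWeather.epw, either edit the weather_file_name value to MyCustomWeather.epw or rename your file to the existing name.

Location    AL_Birmingham.Muni.AP.722280    ...    weather_file_name=USA_AL_Birmingham.Muni.AP.722280.epw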

Sampler

The sampler key defines the type of building sampler to be used for the batch of simulations. The purpose of the sampler is to enumerate the buildings and building characteristics to be run as part of the building stock simulation. It has two keys: type and args.

  • type tells buildstockbatch which sampler class to use.

  • args are passed to the sampler to define how it is run.

Different samplers are available for ResStock and ComStock and variations thereof. Details of the available samplers and their arguments can be found at Samplers.
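
As a sketch, a sampler block without the downselect logic of the example above would use the residential_quota type (one of the available sampler types; see Samplers for the authoritative list and arguments):

sampler:
  type: residential_quota
  args:
    n_datapoints: 350000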

Workflow Generator

The workflow_generator key defines which workflow generator will be used to transform a building description from a set of high level characteristics into an OpenStudio workflow that in turn generates the detailed EnergyPlus model. It has two keys: type and args.

  • type tells buildstockbatch which workflow generator class to use.

  • args are passed to the workflow generator to define how it is run.

Different workflow generators are available for ResStock and ComStock and variations thereof. Details of the available workflow generators and their arguments can be found at Workflow Generators.

Baseline simulations incl. sampling algorithm

Information about baseline simulations is listed under the baseline key.

  • n_buildings_represented: The number of buildings that this sample is meant to represent.

  • skip_sims: Include this key to control whether the baseline simulations are run. The default (i.e., when this key is not included) is to run all the baseline simulations. No results csv table with baseline characteristics will be provided when the baseline simulations are skipped.

  • custom_gems: VERY ADVANCED FEATURE - NOT SUPPORTED - ONLY ATTEMPT USING SINGULARITY CONTAINERS ON EAGLE. This activates the bundle and bundle_path interfaces in the OpenStudio CLI to allow for custom gem packages (needed to support rapid development of the standards gem). This works extraordinarily well if the Singularity image is properly configured, but that is easier said than done. The la100 branch of the nrel/docker-openstudio repo is a starting place if this is required.
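
For example, a baseline block that skips the baseline simulations is a minimal sketch like the following:

baseline:
  n_buildings_represented: 133172057
  skip_sims: true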

OpenStudio Version Overrides

This is a feature only used by ComStock at the moment. Please refer to the ComStock HPC documentation for additional details on the correct configuration. This is noted here to explain the presence of two keys in the version 0.2 schema: os_version and os_sha.

Upgrade Scenarios

Under the upgrades key is a list of upgrades to apply with the following properties:

  • upgrade_name: (required) The name that will be in the outputs for this upgrade scenario.

  • options: A list of options to apply as part of this upgrade.

    • option: (required) The option to apply, in the format parameter|option which can be found in options_lookup.tsv in ResStock.

    • apply_logic: Logic that defines which buildings to apply the upgrade to. See Filtering Logic for instructions.

    • costs: A list of costs for the upgrade. Multiple costs can be entered and each is multiplied by a cost multiplier, described below.

      • value: A cost for the measure, which will be multiplied by the multiplier.

      • multiplier: The cost above is multiplied by this value, which is a function of the building. Since there can be multiple costs, this permits both fixed and variable costs for upgrades that depend on the properties of the baseline building. The multiplier needs to be from this enumeration list in the resstock or comstock repo, or from the list in your branch of that repo.

    • lifetime: Lifetime in years of the upgrade.

  • package_apply_logic: (optional) The conditions under which this package of upgrades should be performed. See Filtering Logic.

  • reference_scenario: (optional) The upgrade_name that should act as a reference to this upgrade for calculating savings. All this does is make reference_scenario show up as a column in the results csvs alongside the upgrade name; buildstockbatch will not do the savings calculation.
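
Putting these keys together, an upgrade entry using the optional keys might look like the following sketch. The option names and logic values are illustrative and must exist in your options_lookup.tsv, and "Windows Baseline" is a hypothetical upgrade_name used only to show reference_scenario:

upgrades:
  - upgrade_name: Triple-Pane Windows
    options:
      - option: Windows|Low-E, Triple, Non-metal, Air, L-Gain
        apply_logic:
          - Vacancy Status|Occupied
        costs:
          - value: 45.77
            multiplier: Window Area (ft^2)
        lifetime: 30
    package_apply_logic:
      - Geometry Building Type RECS|Single-Family Detached
    reference_scenario: Windows Baseline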

Output Directory

output_directory specifies where the outputs of the simulation should be stored.

Eagle Configuration

Under the eagle key is the configuration for running the batch job on the Eagle supercomputer.

  • n_jobs: Number of eagle jobs to parallelize the simulation into

  • minutes_per_sim: Maximum allocated simulation time in minutes

  • account: Eagle allocation account to charge the job to

  • sampling: Configuration for the sampling in eagle

    • time: Maximum time in minutes to allocate to sampling job

  • postprocessing: Eagle configuration for the postprocessing step

    • time: Maximum time in minutes to allocate postprocessing job

    • n_workers: Number of eagle nodes to parallelize the postprocessing job into. Max supported is 32. Default is 2.

    • node_memory_mb: The memory (in MB) to request for the eagle node for postprocessing. The valid values are 85248, 180224 and 751616. Default is 85248.

    • parquet_memory_mb: The size (in MB) of the combined parquet file in memory. Default is 40000.
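
For example, an eagle block that also sets the postprocessing node size might look like this sketch (the times and sizes are illustrative):

eagle:
  n_jobs: 4
  minutes_per_sim: 4
  account: enduse
  sampling:
    time: 5
  postprocessing:
    time: 60
    n_workers: 4
    node_memory_mb: 180224
    parquet_memory_mb: 40000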

AWS Configuration

The top-level aws key is used to specify options for running the batch job on the AWS Batch service.

Note

Many of these options overlap with options specified in the Postprocessing section. The options here take precedence when running on AWS. In a future version we will break backwards compatibility in the config file and have more consistent options.

  • job_identifier: A unique string that starts with an alphabetical character, is up to 10 characters long, and only has letters, numbers or underscore. This is used to name all the AWS service objects to be created and differentiate it from other jobs.

  • s3: Configuration for project data storage on s3. When running on AWS, this overrides the s3 configuration in the Postprocessing Configuration Options.

    • bucket: The s3 bucket this project will use for simulation output and processed data storage.

    • prefix: The s3 prefix at which the data will be stored.

  • region: The AWS region in which the batch will be run and data stored.

  • use_spot: true or false. Defaults to false if missing. This tells the project to use the Spot Market for data simulations, which typically yields about 60-70% cost savings.

  • spot_bid_percent: Percent of on-demand price you’re willing to pay for your simulations. The batch will wait to run until the price drops below this level.

  • batch_array_size: Number of concurrent simulations to run. Max: 10000.

  • notifications_email: Email to notify you of simulation completion. You’ll receive an email at the beginning where you’ll need to accept the subscription to receive further notification emails.

  • emr: Optional key to specify options for postprocessing using an EMR cluster. Generally the defaults should work fine.

    • manager_instance_type: The instance type to use for the EMR master node. Default: m5.xlarge.

    • worker_instance_type: The instance type to use for the EMR worker nodes. Default: r5.4xlarge.

    • worker_instance_count: The number of worker nodes to use. Same as eagle.postprocessing.n_workers. Increase this for a large dataset. Default: 2.

    • dask_worker_vcores: The number of cores for each dask worker. Increase this if your dask workers are running out of memory. Default: 2.

  • job_environment: Specifies the computing requirements for each simulation.

    • vcpus: Number of CPUs needed. Default: 1.

    • memory: Amount of RAM needed for each simulation, in MiB. Default: 1024. For large multifamily buildings this works better if set to 2048.
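
As a sketch, the optional keys not shown in the example at the top of this page would be added under the same aws key (the values here are illustrative):

aws:
  # job_identifier, s3, region, batch_array_size, etc. as in the example above
  use_spot: true
  spot_bid_percent: 60
  job_environment:
    vcpus: 1
    memory: 2048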

Postprocessing

After a batch of simulations completes, the individual simulation results are aggregated in a postprocessing step so that the BuildStock results can be analyzed. The postprocessing proceeds as follows:

  1. The inputs and annual outputs of each simulation are gathered together into one table for each upgrade scenario. In older versions that ran on PAT, this was known as the results.csv. This table is now made available in both csv and parquet format.

  2. Time series results for each simulation are gathered and concatenated into fewer larger parquet files that are better suited for querying using big data analysis tools.

For ResStock runs with the ResidentialScheduleGenerator, the generated schedules are horizontally concatenated with the time series files before aggregation, making sure the schedule values are properly lined up with the timestamps in the same way that EnergyPlus handles ScheduleFiles.

Uploading to AWS Athena

BuildStock results can optionally be uploaded to AWS for further analysis using Athena. This process requires appropriate access to an AWS account to be configured on your machine. You will need to set this up wherever you use buildstockbatch. If you don’t have keys, consult your AWS administrator to get them set up.

Postprocessing Configuration Options

Warning

The region_name and s3 info here are ignored when running buildstock_aws. The configuration is defined in AWS Configuration.

The configuration options for postprocessing and AWS upload are:

  • postprocessing: postprocessing configuration

    • keep_individual_timeseries: For some use cases it is useful to keep the timeseries output for each simulation as a separate parquet file. Setting this option to true allows that. Default is false.

    • aws: configuration related to uploading to and managing data in Amazon Web Services. For this to work, your AWS credentials need to be configured on the machine (see Uploading to AWS Athena). Including this key will cause your datasets to be uploaded to AWS; omitting it will cause them not to be uploaded.

      • region_name: The name of the aws region to use for database creation and other services.

      • s3: Configurations for data upload to Amazon S3 data storage service.

        • bucket: The s3 bucket into which the postprocessed data is uploaded.

        • prefix: S3 prefix at which the data is to be uploaded. The complete path will become: s3://bucket/prefix/output_directory_name

      • athena: configurations for Amazon Athena database creation. If this section is missing/commented-out, no Athena tables are created.

        • glue_service_role: The data in s3 is catalogued using an AWS Glue data crawler. An IAM role must be present for the user that grants the Glue crawler rights to read s3 and create the database catalog, and the name of that IAM role is provided here. Default is: "service-role/AWSGlueServiceRole-default". For help, consult the AWS documentation for Glue Service Roles.

        • database_name: The name of the Athena database in which the data is to be placed. All tables in the database will be prefixed with the output directory name.

        • max_crawling_time: The maximum time in seconds to wait for the glue crawler to catalogue the data before aborting it.
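
For instance, to keep the per-simulation timeseries files while also uploading to Athena, the postprocessing block from the example above would gain one key (a sketch):

postprocessing:
  keep_individual_timeseries: true
  aws:
    region_name: 'us-west-2'
    s3:
      bucket: resbldg-datasets
      prefix: resstock-athena/calibration_runs_new
    athena:
      glue_service_role: service-role/AWSGlueServiceRole-default
      database_name: testing
      max_crawling_time: 300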