GENERATE_CLASSIFIERSET Operation

This operation is used to add classes and properties to points in Datapools. Point classification can be done entirely online using the point classifier interface of the Web-UI to create a classifier set and subsequently apply it to a Datapool. However, some onboarding teams may prefer to generate classifier sets offline. This is supported by the model generation tooling through the GENERATE_CLASSIFIERSET operation. Once a classifier set is created, it can be uploaded via the REST API and applied to one or more Datapools, either using the Web-UI or via the REST API.

manifest.json
{
  "manifest_version": "1.0.0",
  "id_mapping": {
    "bld": "dch:org/myOrg/site/mySite/building/myBuilding#",
    "myDatapool": "dch:org/myOrg/datapool/managed_datapool_MyDataPool_12345678-9abc-def0-1234-56789abcdef0#"
  },
  "operations": [{
      "operation_type": "CREATE",
      "config": {
        "object_file": "objects.csv"
      }
    }, {
      "operation_type": "RELATE",
      "config": {
        "skeleton_file": "skeleton.csv"
      }
    }, {
      "operation_type": "POINT_CREATE_LINK_PATTERN_MATCH",
      "config": {
        "point_linkage_file": "point_linkage.csv",
        "point_list_files": ["point_list.csv"]
      }
    }, {
      "operation_type": "GENERATE_CLASSIFIERSET",
      "config": {
        "organisation": "myOrg",
        "point_class_files": ["point_class.csv"]
      }
    }]
}

All other CSVs in this example are taken from POINT_CREATE_LINK_PATTERN_MATCH example 4; the new addition is point_class.csv. This CSV assigns classes to Points based on a pattern match against the end of the Point ID string.

For example, when classifying a Point with the ID mySite.myBuilding.AHU_1.Enb, the pattern Enb is matched and the class Enable_Status is assigned.

The CSV has two primary columns:

  1. Point_Expression: the pattern to match against the end of the Point ID.

  2. Brick Class: the Brick class to assign to any Point that matches the value in Point_Expression.

Entity Properties can also be declared in additional columns to assign units, scale, offset and any other metadata defined in Brick. For this example, point_class.csv looks like the following:

Example

Inputs

Point_Expression | Brick Class | brick:hasUnit | brick:scaleFactor [brick:scale, brick:offset]
bom_gov_au.94870.air.air_temp | Outside_Air_Temperature_Sensor | DEG_C |
5.DefVal | Temperature_Setpoint | DEG_C |
AhrPbAct.In | Mode_Command | |
AhrPbAct.Out | Mode_Command | |
AhrPbAct.Reset | Reset_Command | |
Alm | Alarm | |
ClgVlv | Valve_Position_Sensor | PERCENT | [100,0]
EcoEnb | Operating_Mode_Status | |
Enb | Enable_Status | |
FilDpStp | Differential_Pressure_Setpoint | KiloPA |
FiltDpr | Filter_Differential_Pressure_Sensor | KiloPA |
HtgVlv | Valve_Position_Sensor | PERCENT | [100,0]
MaSatStp | Max_Discharge_Air_Temperature_Setpoint_Limit | DEG_C |
MaxDmdEn | Operating_Mode_Status | |
MinSatStp | Min_Discharge_Air_Temperature_Setpoint_Limit | DEG_C |
OaDmp | Damper_Position_Sensor | PERCENT |
OatMaxSp | Outside_Air_Lockout_Temperature_Setpoint | DEG_C |
OatMinSp | Low_Outside_Air_Temperature_Enable_Setpoint | DEG_C |
OccClgSp | Cooling_Temperature_Setpoint | DEG_C |
OccDb | Air_Temperature_Sensor | DEG_C |
OccHtgSp | Heating_Temperature_Setpoint | DEG_C |
Pb | Mode_Command | |
PosMinOa | Min_Position_Setpoint_Limit | PERCENT |
RaDmp | Damper_Position_Sensor | PERCENT |
RmSp | Room_Air_Temperature_Setpoint | DEG_C |
Sat | Discharge_Air_Temperature_Sensor | DEG_C |
Sts | System_Status | |
Tmp | Air_Temperature_Sensor | DEG_C |
vOAT | Intake_Air_Temperature_Sensor | DEG_C |

CLASSIFY Example 1: Full input archive

Outputs

Partial view of the output:

{
    "name": "point_class_mgtool_2.0",
    "classifiers": [
        {
            "id": "a1b2c3d4-1234-1234-abcd-123456789abc",
            "enabled": true,
            "comment": "",
            "filterBy": {
                "id": {
                    "enabled": true,
                    "value": "*Pb"
                },
                "timeseriesId": {
                    "enabled": false
                },
                "class": {
                    "enabled": false,
                    "includeChildren": false
                },
                "label": {
                    "enabled": false
                },
                "entityProperties": {
                    "enabled": false
                },
                "entity": {
                    "enabled": false
                },
                "unit": {
                    "enabled": false
                }
            },
            "executeRules": {
                "class": {
                    "enabled": true,
                    "value": "brick:Mode_Command"
                },
                "entityProperties": {
                    "enabled": false,
                    "value": null
                },
                "label": {
                    "enabled": false
                },
                "entity": {
                    "enabled": false
                },
                "unit": {
                    "enabled": false,
                    "value": null
                }
            }
        },
        ...
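Each CSV row maps onto one classifier object in the set above: the Point_Expression becomes a `*`-prefixed id filter, and the Brick Class becomes the class rule. A minimal sketch of that mapping, assuming the JSON shape shown in the partial output (the `row_to_classifier` helper and the use of a random UUID for `id` are illustrative assumptions, not the tool's code):

```python
import uuid

def row_to_classifier(point_expression: str, brick_class: str) -> dict:
    """Build one classifier entry in the shape shown above (sketch)."""
    return {
        "id": str(uuid.uuid4()),
        "enabled": True,
        "comment": "",
        "filterBy": {
            # The Point_Expression becomes a trailing-wildcard id filter.
            "id": {"enabled": True, "value": "*" + point_expression},
            "timeseriesId": {"enabled": False},
            "class": {"enabled": False, "includeChildren": False},
            "label": {"enabled": False},
            "entityProperties": {"enabled": False},
            "entity": {"enabled": False},
            "unit": {"enabled": False},
        },
        "executeRules": {
            # The Brick Class becomes the class assignment rule.
            "class": {"enabled": True, "value": "brick:" + brick_class},
            "entityProperties": {"enabled": False, "value": None},
            "label": {"enabled": False},
            "entity": {"enabled": False},
            "unit": {"enabled": False, "value": None},
        },
    }

classifier = row_to_classifier("Pb", "Mode_Command")
```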

This set can then be uploaded to DCH via this endpoint. You can then either continue the rest of the classification in the Web-UI, or use the REST API.

CLASSIFY Example 1: Full output archive

Using the Swagger API docs to finish point classification

Classification can be finished using our API docs.

Step 1: authenticate

  1. On the API docs page, press Authorize.

  2. Paste your API key in the ApiKeyAuth (apiKey) field.

  3. Press Authorize.

Step 2: POST the generated classifier set

  1. Scroll down to the Classifier Set section of the docs.

  2. Press "POST /classifiersets".

  3. Click on Try it out.

  4. Copy and paste the contents of your classifier set JSON into the Request Body.

  5. Press Execute.

  6. A response of 201 means this step was successfully executed. Copy the id field of the response (your set ID).
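The same POST can be scripted instead of driven through the Swagger UI. A hedged sketch using only the Python standard library; the base URL (https://dch.example.com/api) and the X-Api-Key header name are placeholder assumptions — check the API docs for the real values:

```python
import json
import urllib.request

API_BASE = "https://dch.example.com/api"  # assumption: replace with the real base URL
API_KEY = "my-api-key"                    # your API key from the Authorize step

def build_post(path: str, body: dict) -> urllib.request.Request:
    """Build an authenticated JSON POST request (header name is an assumption)."""
    return urllib.request.Request(
        API_BASE + path,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-Api-Key": API_KEY},
        method="POST",
    )

# The generated classifier set JSON is the request body.
classifier_set = {"name": "point_class_mgtool_2.0", "classifiers": []}
req = build_post("/classifiersets", classifier_set)
# resp = urllib.request.urlopen(req)  # expect HTTP 201; the response body contains the set id
```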

Step 3: POST classification plan

At this stage, you can move to the UI for steps 3 and 4, or you can continue with the API docs.

  1. Scroll up to the Classification Plan section.

  2. Press "POST /classificationplans".

  3. Click on Try it out.

  4. Fill in the request body:

    1. planScope is a list of Datapools that are the target of this plan.

    2. organisationId is the organisation where the Datapool exists.

    3. classifierSetId is the classifier set ID from step 2.

  5. Press Execute.

  6. A response of 201 means this step was successfully executed. Copy the id field of the response again (your plan ID).

{
  "planScope": [
    "my_datapool_id"
  ],
  "organisationId": "my_organisation_id",
  "classifierSetId": "my_classifier_set_id_from_step_2"
}

Step 4: POST classification plan apply

  1. In the same section as step 3 (Classification Plan), press "POST /classificationplans/{classification_plan_id}/apply".

  2. Click on Try it out.

  3. Paste your plan ID from step 3 into the classification_plan_id field.

  4. Press Execute.
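Steps 3 and 4 can likewise be scripted. Another standard-library sketch; as before, API_BASE, the X-Api-Key header name, the placeholder IDs, and the empty apply body are assumptions:

```python
import json
import urllib.request

API_BASE = "https://dch.example.com/api"  # assumption: replace with the real base URL
HEADERS = {"Content-Type": "application/json", "X-Api-Key": "my-api-key"}

# Step 3: create the classification plan.
plan_body = {
    "planScope": ["my_datapool_id"],
    "organisationId": "my_organisation_id",
    "classifierSetId": "my_classifier_set_id_from_step_2",
}
plan_req = urllib.request.Request(
    API_BASE + "/classificationplans",
    data=json.dumps(plan_body).encode("utf-8"),
    headers=HEADERS,
    method="POST",
)
# plan_id = json.load(urllib.request.urlopen(plan_req))["id"]  # expect HTTP 201

# Step 4: apply the plan (empty JSON body is an assumption).
plan_id = "my_plan_id_from_step_3"
apply_req = urllib.request.Request(
    f"{API_BASE}/classificationplans/{plan_id}/apply",
    data=b"{}",
    headers=HEADERS,
    method="POST",
)
# urllib.request.urlopen(apply_req)
```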
