Downloading building data
Timeseries data can be connected to points within DCH, and this data can be uploaded and downloaded using DCH's point data API. Multiple input and output formats are supported, including JSON and CSV. When downloading, you can configure how much data is returned, the time period to gather data from, and which points to gather data from. For full details of the available options, see the documentation for the download endpoint.
Downloading data via DCH API
As JSON
Data is gathered for a given point using a point identifier. To find out more about finding points within DCH, see Walkthrough: Finding points. Suppose the following is received as part of the result of sending a GET request to DCH's /points endpoint:
...
{
"id": "dsapi-big-funny-plant-stream1",
"uid": "33b199a8-9eaa-4d75-b4dc-c70aa66a7570",
"organisationId": "csiro",
"datapoolId": "csiro:managed_datapool",
"name": "dsapi-big-funny-plant-stream1",
"type": "brick:Point",
"unit": null,
"compositeId": "csiro:managed_datapool:dsapi-little-filthy-wonder-stream1"
},
...
Within the response there are multiple identifiers, each with a different purpose:

id
: Simple non-unique identifier for the point

uid
: Universally unique identifier (UUIDv4) for the point

organisationId
: Unique identifier for the organisation to which the point belongs

datapoolId
: Unique identifier for the datapool within which the point is stored

compositeId
: A unique identifier for the point, composed of the organisation identifier, datapool identifier and point non-unique identifier, delimited by colons, i.e. <organisationId>:<datapoolId>:<id>
Therefore, the two unique identifiers we can use to refer to a point are uid and compositeId. DCH accepts both of these forms of identifiers to refer to a point. Now that we have gathered a point ID, we can gather point data from DCH.
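Because the composite ID follows a fixed <organisationId>:<datapoolId>:<id> layout, its parts can be recovered by splitting on the first two colons. A minimal sketch, using the identifier from the example above (this assumes the point's own id contains no colons, as in this walkthrough):

```python
# A compositeId is <organisationId>:<datapoolId>:<id>; splitting on the
# first two colons recovers its three parts.
composite_id = "csiro:managed_datapool:dsapi-big-funny-plant-stream1"
organisation_id, datapool_id, point_id = composite_id.split(":", 2)
print(organisation_id, datapool_id, point_id)
```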
Using composite point identifier
First, ensure you have a valid API key; for more information, see Walkthrough: Authenticating with DCH APIs. Using this API key, we make a POST request to DCH's /observations/download endpoint, specifying the composite ID in the points field of the request body. Additionally, we limit the number of observations returned to 3 using the limit query parameter, and request the response in JSON format using the accept header. For other valid formats, see the documentation for the download endpoint.
Get observations via composite ID - request:
curl -X 'POST' \
'https://dataclearinghouse.org/api/chronos/v1/observations/download?limit=3' \
-H 'accept: application/json' \
-H 'X-Api-Key: <MY API KEY>' \
-H 'Content-Type: application/json' \
-d '{
"points": [
"csiro:managed_datapool:dsapi-big-funny-plant-stream1"
]
}'
The response received for this point is:
Get observations via composite ID - response:
{
"metadata": {
"$schema": "v0.1.1",
"startTime": "2023-06-12T02:00:00.000Z",
"endTime": "2023-12-10T02:00:00.000Z",
"count": 3,
"points": {
"0": "csiro:managed_datapool:dsapi-big-funny-plant-stream1"
},
"self": "https://dataclearinghouse.org/api/chronos/v1/observations/download?limit=3"
},
"data": [
{
"t": "2023-06-12T02:00:00.000Z",
"p": "0",
"n": 1
},
{
"t": "2023-06-12T03:00:00.000Z",
"p": "0",
"n": 10
},
{
"t": "2023-12-10T02:00:00.000Z",
"p": "0",
"n": 12
}
]
}
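In the response, each observation's "p" field is an index into the metadata.points map rather than a full identifier, which keeps the payload compact. A short sketch of resolving those indices back to point identifiers, using an abridged copy of the response above:

```python
import json

# Response body from the request above, abridged to the relevant fields.
response = json.loads("""
{
  "metadata": {
    "points": {"0": "csiro:managed_datapool:dsapi-big-funny-plant-stream1"}
  },
  "data": [
    {"t": "2023-06-12T02:00:00.000Z", "p": "0", "n": 1},
    {"t": "2023-06-12T03:00:00.000Z", "p": "0", "n": 10},
    {"t": "2023-12-10T02:00:00.000Z", "p": "0", "n": 12}
  ]
}
""")

# Resolve each observation's "p" index to the full point identifier.
rows = [
    (obs["t"], response["metadata"]["points"][obs["p"]], obs["n"])
    for obs in response["data"]
]
for t, point, value in rows:
    print(t, point, value)
```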
Using unique point identifier
We can also use the uid to gather the equivalent data.
Get observations via UID - request:
curl -X 'POST' \
'https://dataclearinghouse.org/api/chronos/v1/observations/download?limit=3' \
-H 'accept: application/json' \
-H 'X-Api-Key: <MY API KEY>' \
-H 'Content-Type: application/json' \
-d '{
"points": [
"33b199a8-9eaa-4d75-b4dc-c70aa66a7570"
]
}'
Get observations via UID - response:
{
"metadata": {
"$schema": "v0.1.1",
"startTime": "2023-06-12T02:00:00.000Z",
"endTime": "2023-12-10T02:00:00.000Z",
"count": 3,
"points": {
"0": "33b199a8-9eaa-4d75-b4dc-c70aa66a7570"
},
"self": "https://dataclearinghouse.org/api/chronos/v1/observations/download?limit=3"
},
"data": [
{
"t": "2023-06-12T02:00:00.000Z",
"p": "0",
"n": 1
},
{
"t": "2023-06-12T03:00:00.000Z",
"p": "0",
"n": 10
},
{
"t": "2023-12-10T02:00:00.000Z",
"p": "0",
"n": 12
}
]
}
Paginating JSON data
We begin by repeating the request from the previous example:
curl -X 'POST' \
'https://dataclearinghouse.org/api/chronos/v1/observations/download?limit=3' \
-H 'accept: application/json' \
-H 'X-Api-Key: <MY API KEY>' \
-H 'Content-Type: application/json' \
-d '{
"points": [
"33b199a8-9eaa-4d75-b4dc-c70aa66a7570"
]
}'
Which results in the following response:
{
"metadata": {
"$schema": "v0.1.1",
"startTime": "2023-06-12T02:00:00.000Z",
"endTime": "2023-06-12T04:00:00.000Z",
"count": 3,
"points": {
"0": "33b199a8-9eaa-4d75-b4dc-c70aa66a7570"
},
"self": "https://dataclearinghouse.org/api/chronos/v1/observations/download?limit=3"
},
"data": [
{ "t": "2023-06-12T02:00:00.000Z", "p": "0", "n": 1 },
{ "t": "2023-06-12T03:00:00.000Z", "p": "0", "n": 10 },
{ "t": "2023-06-12T04:00:00.000Z", "p": "0", "n": 12 }
]
}
However, we can also interrogate the response headers:
access-control-allow-origin: *
content-length: 486
content-type: application/json
date: Thu, 09 Jan 2025 02:20:21 GMT
link: <https://dataclearinghouse.org/api/chronos/v1/observations/download?continuationToken=7c3c223a40c1d219a8d69fa1276979a6efb061eb2d43628ba970b7d741317526>; rel="next"
server: APISIX/3.6.0
vary: Origin
x-envoy-upstream-service-time: 142
In particular, there is a link header, which conforms to the standard approach to web linking described in RFC 8288. The link header contains links to relevant resources; in this case, there is a link, https://dataclearinghouse.org/api/chronos/v1/observations/download?continuationToken=7c3c223a40c1d219a8d69fa1276979a6efb061eb2d43628ba970b7d741317526, which points to the next page of data, as indicated by the rel="next" portion of the header.
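The next-page URL can be pulled out of the link header programmatically. A minimal sketch of a parser for headers of the form shown above (the token below is illustrative; for full generality use a complete RFC 8288 parser):

```python
import re

# A Link header value of the form returned above (illustrative token).
link_header = (
    '<https://dataclearinghouse.org/api/chronos/v1/observations/download'
    '?continuationToken=abc123>; rel="next"'
)

def parse_link_header(value):
    """Return a dict mapping rel values to URLs for simple Link headers."""
    links = {}
    for part in value.split(","):
        m = re.match(r'\s*<([^>]*)>;\s*rel="([^"]*)"', part)
        if m:
            links[m.group(2)] = m.group(1)
    return links

next_url = parse_link_header(link_header).get("next")
print(next_url)
```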
We can then request the page referred to by the link header:
curl -X 'POST' \
'https://dataclearinghouse.org/api/chronos/v1/observations/download?continuationToken=7c3c223a40c1d219a8d69fa1276979a6efb061eb2d43628ba970b7d741317526' \
-H 'accept: application/json' \
-H 'X-Api-Key: <MY API KEY>' \
-H 'Content-Type: application/json'
You may notice that we no longer supply any query parameters, and we have also removed the points specification from the request payload. We can omit both because the continuationToken stores the original query parameters along with the point identifiers being requested. The response of this request is as follows:
{
"metadata": {
"$schema": "v0.1.1",
"startTime": "2023-06-12T05:00:00.000Z",
"endTime": "2023-06-12T07:00:00.000Z",
"count": 3,
"points": {
"0": "33b199a8-9eaa-4d75-b4dc-c70aa66a7570"
},
"self": "https://develop.dataclearinghouse.org/api/chronos/v1/observations/download?continuationToken=7c3c223a40c1d219a8d69fa1276979a6efb061eb2d43628ba970b7d741317526"
},
"data": [
{ "t": "2023-06-12T05:00:00.000Z", "p": "0", "n": 0.013089595571344441 },
{ "t": "2023-06-12T06:00:00.000Z", "p": "0", "n": 0.01745240643728351 },
{ "t": "2023-06-12T07:00:00.000Z", "p": "0", "n": 0.02181488503456112 }
]
}
This demonstrates that the limit parameter we originally set to 3 is still being respected. Looking at the response headers:
access-control-allow-origin: *
content-length: 576
content-type: application/json
date: Thu, 09 Jan 2025 03:32:12 GMT
link: <https://dataclearinghouse.org/api/chronos/v1/observations/download?continuationToken=9f31db6224fcc8ccb35040328c5d7a2f4b840960601910599ed59d6e83d90ac4>; rel="next",<https://dataclearinghouse.org/api/chronos/v1/observations/download?continuationToken=7c3c223a40c1d219a8d69fa1276979a6efb061eb2d43628ba970b7d741317526>; rel="this"
server: APISIX/3.6.0
vary: Origin
x-envoy-upstream-service-time: 160
The link header now contains two links: one with rel="next", like the one we followed to reach this page, and one with rel="this", which identifies the link used to gather the response shown above.
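Putting the pieces together, downloading a full timeseries means following rel="next" links until no further page is advertised. A sketch of that loop, with the network call stubbed out so the control flow can be seen in isolation (fetch_page stands in for the real POST request; the tokens and values are illustrative):

```python
# Stubbed pages: each token maps to (observations, next_token). The real
# implementation would issue the POST request and parse the Link header.
PAGES = {
    "start": ([1, 10, 12], "token-b"),
    "token-b": ([0.013, 0.017, 0.021], None),  # last page: no rel="next"
}

def fetch_page(token):
    """Stand-in for the real /observations/download request."""
    return PAGES[token]

def download_all(first_token):
    """Follow rel="next" continuation tokens until no further page exists."""
    observations = []
    token = first_token
    while token is not None:
        data, token = fetch_page(token)
        observations.extend(data)
    return observations

print(download_all("start"))
```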
As CSV
Similar to the JSON example above, we can control the format of the returned data using the accept header. Here we set the accept header to text/csv.
Get observations via UID - request:
curl -X 'POST' \
'https://dataclearinghouse.org/api/chronos/v1/observations/download?limit=3' \
-H 'accept: text/csv' \
-H 'X-Api-Key: <MY API KEY>' \
-H 'Content-Type: application/json' \
-d '{
"points": [
"33b199a8-9eaa-4d75-b4dc-c70aa66a7570"
]
}'
From this request, we receive the same data in CSV format.
t,n,v,d,a,point
2023-06-12T02:00:00.000Z,1.0,,,,33b199a8-9eaa-4d75-b4dc-c70aa66a7570
2023-06-12T03:00:00.000Z,10.0,,,,33b199a8-9eaa-4d75-b4dc-c70aa66a7570
2023-12-10T02:00:00.000Z,12.0,,,,33b199a8-9eaa-4d75-b4dc-c70aa66a7570
In tabular format, we can more easily see that we have not received any vector ("v"), document ("d") or annotation ("a") data for the given period.
t                          n     point
2023-06-12T02:00:00.000Z   1.0   33b199a8-9eaa-4d75-b4dc-c70aa66a7570
2023-06-12T03:00:00.000Z   10.0  33b199a8-9eaa-4d75-b4dc-c70aa66a7570
2023-12-10T02:00:00.000Z   12.0  33b199a8-9eaa-4d75-b4dc-c70aa66a7570
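The CSV form is convenient for downstream processing. A short sketch of reading the response body above with the standard library, keeping the timestamp, numeric value and point columns:

```python
import csv
import io

# CSV body from the response above; columns are t,n,v,d,a,point.
body = """t,n,v,d,a,point
2023-06-12T02:00:00.000Z,1.0,,,,33b199a8-9eaa-4d75-b4dc-c70aa66a7570
2023-06-12T03:00:00.000Z,10.0,,,,33b199a8-9eaa-4d75-b4dc-c70aa66a7570
2023-12-10T02:00:00.000Z,12.0,,,,33b199a8-9eaa-4d75-b4dc-c70aa66a7570
"""

# Keep timestamp, numeric value and point identifier for each observation.
readings = [
    (row["t"], float(row["n"]), row["point"])
    for row in csv.DictReader(io.StringIO(body))
]
for t, value, point in readings:
    print(t, value, point)
```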