wps_decompose_flow_vectors¶
wps_decompose_flow_vectors is a process that runs the decompose_flow_vectors module of the PCIC Climate Explorer Data Preparation Tools. Here, the client connects to a remote Thunderbird instance using the url parameter.¶
In [1]:
from birdy import WPSClient
import os
from wps_tools.testing import get_target_url
from netCDF4 import Dataset
# Ensure we are in the working directory with access to the data
while os.path.basename(os.getcwd()) != "thunderbird":
    os.chdir('../')
In [2]:
# NBVAL_IGNORE_OUTPUT
url = get_target_url("thunderbird")
print(f"Using thunderbird on {url}")
Using thunderbird on https://marble-dev01.pcic.uvic.ca/twitcher/ows/proxy/thunderbird/wps
In [3]:
thunderbird = WPSClient(url)
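If you are running your own Thunderbird instance (for example locally during development), the same client can be pointed at any WPS endpoint. The environment variable name, host, and port below are assumptions for illustration only, not part of this deployment.

# Hypothetical alternative endpoint; adjust the host/port to your own deployment
local_url = os.getenv("LOCAL_THUNDERBIRD_URL", "http://localhost:5000/wps")
# local_thunderbird = WPSClient(local_url)  # uncomment only if such an instance is running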
Help for individual processes can be displayed using the ? command (e.g. bird.process?).¶
In [4]:
# NBVAL_IGNORE_OUTPUT
thunderbird.decompose_flow_vectors?
Signature:
thunderbird.decompose_flow_vectors(
    netcdf,
    variable,
    dest_file=None,
    loglevel='INFO',
)
Docstring:
Process an indexed flow direction netCDF into a vectored netCDF suitable for ncWMS display

Parameters
----------
netcdf : ComplexData:mimetype:`application/x-netcdf`, :mimetype:`application/x-ogc-dods`
    NetCDF file
variable : string
    netCDF variable describing flow direction
dest_file : string
    destination netCDF file
loglevel : {'CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG', 'NOTSET'}string
    Logging level

Returns
-------
output : ComplexData:mimetype:`application/x-netcdf`
    output netCDF file

File:      ~/code/thunderbird/</home/eyvorchuk/.cache/pypoetry/virtualenvs/thunderbird-7g6X3rbj-py3.10/lib/python3.10/site-packages/birdy/client/base.py-4>
Type:      method
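Outside of IPython, the same signature and docstring can be inspected with the built-in help() function:

# Equivalent to the ? magic, usable from plain Python scripts as well
help(thunderbird.decompose_flow_vectors)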
We can use the docstring to ensure we provide the appropriate parameters.¶
In [5]:
daccs_host = os.getenv("DACCS_HOST", "marble-dev01.pcic.uvic.ca")
flow_vectors_file = f"https://{daccs_host}/twitcher/ows/proxy/thredds/dodsC/datasets/storage/data/projects/comp_support/daccs/test-data/sample_flow_parameters.nc"
variable = "Flow_Direction"
dest_file = "output.nc"
output = thunderbird.decompose_flow_vectors(netcdf=flow_vectors_file, variable=variable, dest_file=dest_file)
# Use asobj=True to access the output file content as a Dataset
output_data = output.get(asobj=True)[0]
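If you would rather handle the file yourself, calling get() without asobj=True returns a reference to the stored output (typically a URL on the server) instead of an opened Dataset. A minimal sketch:

# Sketch: the default get() returns the output reference rather than its contents
output_reference = output.get()[0]
print(output_reference)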
Once the process has completed, we can extract the results and check that they match what we expected.¶
In [6]:
# Flatten the 2-D grids into 1-D lists; masked cells are filtered
# out of the decomposed output components.
input_data = [
    direction
    for subarray in Dataset(flow_vectors_file).variables["Flow_Direction"]
    for direction in subarray
]
output_eastward = [
    x_magnitude
    for subarray in output_data.variables["eastward_Flow_Direction"]
    for x_magnitude in subarray
    if x_magnitude != "masked"
]
output_northward = [
    y_magnitude
    for subarray in output_data.variables["northward_Flow_Direction"]
    for y_magnitude in subarray
    if y_magnitude != "masked"
]
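Since netCDF4 returns these variables as numpy masked arrays by default, an equivalent and more concise flattening is possible with compressed(), which drops masked entries. This is only an alternative sketch, not part of the original checks:

# Alternative sketch: [:] loads the full masked array and .compressed()
# returns the unmasked values as a flat 1-D array.
eastward_alt = output_data.variables["eastward_Flow_Direction"][:].compressed()
assert len(eastward_alt) == len(output_eastward)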
In [7]:
# Check that the input and output data sizes match
assert len(input_data) == len(output_eastward)
assert len(output_eastward) == len(output_northward)
In [8]:
# Check that cells marked as outlets in the input have zero flow in the output
outlets = [i for i, direction in enumerate(input_data) if direction == 9]
# Outlets (input value 9) should decompose to zero flow in both components
eastward_outlets = [output_eastward[i] for i in outlets]
northward_outlets = [output_northward[i] for i in outlets]
expected_outlets = [0.0] * len(outlets)
assert eastward_outlets == expected_outlets
assert northward_outlets == expected_outlets
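As an optional extra check, and assuming the decomposition emits approximately unit-length vectors for the eight non-outlet flow directions (an assumption about the module, not something verified above), every decomposed cell should have a magnitude close to either 0 (outlet or no flow) or 1:

# Hedged sketch: assumes non-outlet cells decompose to ~unit vectors
magnitudes = [
    (e ** 2 + n ** 2) ** 0.5
    for e, n in zip(output_eastward, output_northward)
]
assert all(m < 1e-3 or abs(m - 1) < 1e-3 for m in magnitudes)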