nominal
checklist_builder
checklist_builder(
    name: str,
    description: str = "",
    assignee_email: str | None = None,
    default_ref_name: str | None = None,
) -> ChecklistBuilder
Create a checklist builder to add checks and variables, and publish the checklist to Nominal.
If assignee_email is None, the checklist is assigned to the user executing the code.
Example:
builder = nm.checklist_builder("Programmatically created checklist")
builder.add_check(
    name="derivative of cycle time is too high",
    priority=2,
    expression="derivative(numericChannel(channelName = 'Cycle_Time', refName = 'manufacturing')) > 0.05",
)
checklist = builder.publish()
create_asset
create_asset(
    name: str,
    description: str | None = None,
    *,
    properties: Mapping[str, str] | None = None,
    labels: Sequence[str] = ()
) -> Asset
Create an asset.
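A minimal sketch (the asset name, properties, and labels are illustrative):
asset = nm.create_asset(
    "Engine Test Stand 3",  # hypothetical asset name
    description="Dyno cell used for endurance runs",
    properties={"site": "austin"},
    labels=["dyno", "endurance"],
)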
create_run
create_run(
    name: str,
    start: datetime | str | IntegralNanosecondsUTC,
    end: datetime | str | IntegralNanosecondsUTC | None,
    description: str | None = None,
) -> Run
Create a run in the Nominal platform.
If the run has no end (for example, if it is ongoing), use end=None.
To add a dataset to the run, use run.add_dataset().
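A minimal sketch of an ongoing run (names are illustrative; dataset is assumed to be a previously created Dataset, and run.add_dataset() is assumed to take a reference name and a Dataset, per the note above):
from datetime import datetime, timezone

run = nm.create_run(
    "Endurance run 42",  # hypothetical run name
    start=datetime(2024, 1, 1, tzinfo=timezone.utc),
    end=None,  # ongoing run
)
run.add_dataset("manufacturing", dataset)  # assumed (ref_name, dataset) argument order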
create_run_csv
create_run_csv(
    file: Path | str,
    name: str,
    timestamp_column: str,
    timestamp_type: _LiteralAbsolute | Iso8601 | Epoch,
    description: str | None = None,
) -> Run
Create a dataset from a CSV file, and create a run based on it.
This is a convenience function that combines upload_csv() and create_run(), and can only be used with absolute timestamps. For relative timestamps or custom formats, use upload_dataset() and create_run() separately.
The name and description are added to the run. The dataset is created with the name "Dataset for Run: {name}". The reference name for the dataset in the run is "dataset".
The run start and end times are created from the minimum and maximum timestamps in the CSV file in the timestamp column.
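A minimal sketch (the file and column names are illustrative; check which timestamp types your version accepts):
run = nm.create_run_csv(
    "flight_data.csv",  # hypothetical file
    name="Flight 7",
    timestamp_column="timestamp",  # hypothetical column name
    timestamp_type="iso_8601",  # assuming the ISO 8601 literal is accepted here
)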
create_streaming_connection
create_streaming_connection(
    datasource_id: str,
    connection_name: str,
    datasource_description: str | None = None,
    *,
    required_tag_names: list[str] | None = None
) -> Connection
Create a new datasource and a new connection.
datasource_id: A human-readable identifier. Must be unique within an organization.
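A minimal sketch (all names are illustrative):
connection = nm.create_streaming_connection(
    datasource_id="engine-telemetry",  # hypothetical; must be unique within the organization
    connection_name="Engine telemetry stream",
    datasource_description="Live telemetry from the test stand",
)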
download_attachment
Retrieve an attachment from the Nominal platform and save it to a file.
get_attachment
get_attachment(rid: str) -> Attachment
Retrieve an attachment from the Nominal platform by its RID.
get_connection
get_connection(rid: str) -> Connection
Retrieve a connection from the Nominal platform by its RID.
get_dataset
Retrieve a dataset from the Nominal platform by its RID.
get_default_client
get_default_client() -> NominalClient
Retrieve the default client to the Nominal platform.
get_log_set
Retrieve a log set from the Nominal platform by its RID.
search_runs
search_runs(
    *,
    start: str | datetime | IntegralNanosecondsUTC | None = None,
    end: str | datetime | IntegralNanosecondsUTC | None = None,
    name_substring: str | None = None,
    label: str | None = None,
    property: tuple[str, str] | None = None
) -> list[Run]
Search for runs meeting the specified filters.
Filters are ANDed together, e.g. (run.label == label) AND (run.end <= end).
- start and end times are both inclusive
- name_substring: search for a (case-insensitive) substring in the name
- property is a key-value pair, e.g. ("name", "value")
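A minimal sketch (the filter values are illustrative):
runs = nm.search_runs(
    start="2024-01-01T00:00:00Z",  # inclusive
    name_substring="endurance",  # case-insensitive match
    property=("site", "austin"),  # hypothetical key-value pair
)
for run in runs:
    print(run)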
set_base_url
set_base_url(base_url: str) -> None
Set the default Nominal platform base url.
For production environments: "https://api.gov.nominal.io/api". For staging environments: "https://api-staging.gov.nominal.io/api". For local development: "https://api.nominal.test".
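For example, to point the default client at the production environment before making other calls:
nm.set_base_url("https://api.gov.nominal.io/api")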
set_token
Set the default token to be used in association with a given base url. Use in conjunction with set_base_url().
upload_attachment
upload_attachment(
    file: Path | str, name: str, description: str | None = None
) -> Attachment
Upload an attachment to the Nominal platform.
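A minimal sketch (the file and names are illustrative):
attachment = nm.upload_attachment(
    "test_report.pdf",  # hypothetical file
    name="Test report",
    description="Post-run summary report",
)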
upload_csv
upload_csv(
    file: Path | str,
    name: str | None,
    timestamp_column: str,
    timestamp_type: _AnyTimestampType,
    description: str | None = None,
    channel_name_delimiter: str | None = None,
    *,
    wait_until_complete: bool = True
) -> Dataset
Create a dataset in the Nominal platform from a .csv or .csv.gz file.
If name is None, the dataset is created with the name of the file.
If wait_until_complete=True (the default), this function waits until the dataset has completed ingestion before returning. If you are uploading many datasets, set wait_until_complete=False instead and call wait_until_ingestions_complete() after uploading all datasets to allow for parallel ingestion.
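A minimal sketch of the parallel-ingestion pattern described above (file and column names are illustrative; wait_until_ingestions_complete() is assumed to be available at module level, as referenced here):
datasets = [
    nm.upload_csv(
        path,
        name=None,  # fall back to the file name
        timestamp_column="timestamp",  # hypothetical column name
        timestamp_type="epoch_seconds",  # assuming the epoch-seconds literal is accepted
        wait_until_complete=False,
    )
    for path in ["run_a.csv", "run_b.csv.gz"]  # hypothetical files
]
nm.wait_until_ingestions_complete()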
upload_mcap_video
upload_mcap_video(
    file: Path | str,
    topic: str,
    name: str | None = None,
    description: str | None = None,
    *,
    wait_until_complete: bool = True
) -> Video
Create a video in the Nominal platform from a topic in an MCAP file.
If name is None, the video is created with the name of the file.
If wait_until_complete=True (the default), this function waits until the video has completed ingestion before returning. If you are uploading many videos, set wait_until_complete=False instead and call wait_until_ingestion_complete() after uploading all videos to allow for parallel ingestion.
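A minimal sketch (the file and topic names are illustrative):
video = nm.upload_mcap_video(
    "flight.mcap",  # hypothetical file
    topic="/camera/front",  # hypothetical MCAP topic
    name="Front camera",
)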
upload_pandas
upload_pandas(
    df: DataFrame,
    name: str,
    timestamp_column: str,
    timestamp_type: _AnyTimestampType,
    description: str | None = None,
    channel_name_delimiter: str | None = None,
    *,
    wait_until_complete: bool = True
) -> Dataset
Create a dataset in the Nominal platform from a pandas.DataFrame.
If wait_until_complete=True (the default), this function waits until the dataset has completed ingestion before returning. If you are uploading many datasets, set wait_until_complete=False instead and call wait_until_ingestions_complete() after uploading all datasets to allow for parallel ingestion.
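A minimal sketch (the data and column names are illustrative):
import pandas as pd

df = pd.DataFrame(
    {
        "timestamp": ["2024-01-01T00:00:00Z", "2024-01-01T00:00:01Z"],
        "temperature": [21.5, 21.7],
    }
)
dataset = nm.upload_pandas(
    df,
    name="Temperatures",
    timestamp_column="timestamp",
    timestamp_type="iso_8601",  # assuming the ISO 8601 literal is accepted
)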
upload_polars
upload_polars(
    df: DataFrame,
    name: str,
    timestamp_column: str,
    timestamp_type: _AnyTimestampType,
    description: str | None = None,
    channel_name_delimiter: str | None = None,
    *,
    wait_until_complete: bool = True
) -> Dataset
Create a dataset in the Nominal platform from a polars.DataFrame.
If wait_until_complete=True (the default), this function waits until the dataset has completed ingestion before returning. If you are uploading many datasets, set wait_until_complete=False instead and call wait_until_ingestions_complete() after uploading all datasets to allow for parallel ingestion.
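The polars variant follows the same shape (data and names are illustrative):
import polars as pl

df = pl.DataFrame({"timestamp": [0.0, 0.1, 0.2], "pressure": [101.3, 101.4, 101.2]})
dataset = nm.upload_polars(
    df,
    name="Pressures",
    timestamp_column="timestamp",
    timestamp_type="epoch_seconds",  # assuming the epoch-seconds literal is accepted
)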
upload_tdms
upload_tdms(
    file: Path | str,
    name: str | None = None,
    description: str | None = None,
    timestamp_column: str | None = None,
    timestamp_type: _AnyTimestampType | None = None,
    *,
    wait_until_complete: bool = True
) -> Dataset
Create a dataset in the Nominal platform from a TDMS file.
If name is None, the dataset is created with the name of the file with a .csv suffix.
If timestamp_column is provided, it must be present in every group, and the length of all data columns must be equal to (and aligned with) timestamp_column.
If timestamp_column is None, TDMS channel properties must have both a wf_increment and a wf_start_time property to be included in the dataset.
Note that timestamp_column and timestamp_type must be included together or excluded together.
Channels will be named as f"{group_name}.{channel_name}", with spaces replaced by underscores.
If wait_until_complete=True (the default), this function waits until the dataset has completed ingestion before returning. If you are uploading many datasets, set wait_until_complete=False instead and call wait_until_ingestions_complete() after uploading all datasets to allow for parallel ingestion.
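A minimal sketch relying on waveform timing from channel properties (the file name is illustrative; timestamp_column and timestamp_type are omitted together, per the note above):
dataset = nm.upload_tdms(
    "bench_test.tdms",  # hypothetical file; channels need wf_increment and wf_start_time
    name="Bench test",
)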