nominal
checklist_builder
checklist_builder(
    name: str,
    description: str = "",
    assignee_email: str | None = None,
    default_ref_name: str | None = None,
) -> ChecklistBuilder
Create a checklist builder to add checks and variables, and publish the checklist to Nominal.
If assignee_email is None, the checklist is assigned to the user executing the code.
Example:
builder = nm.checklist_builder("Programmatically created checklist")
builder.add_check(
    name="derivative of cycle time is too high",
    priority=2,
    expression="derivative(numericChannel(channelName = 'Cycle_Time', refName = 'manufacturing')) > 0.05",
)
checklist = builder.publish()
create_asset
create_asset(
    name: str,
    description: str | None = None,
    *,
    properties: Mapping[str, str] | None = None,
    labels: Sequence[str] = ()
) -> Asset
Create an asset.
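Example (a minimal sketch; the name, description, properties, and labels are illustrative):
import nominal as nm

asset = nm.create_asset(
    "Engine Test Stand 4",
    description="Dyno cell 4 test stand",
    properties={"site": "el-segundo"},
    labels=["dyno", "propulsion"],
)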
create_log_set
create_log_set(
    name: str,
    logs: (
        Iterable[Log]
        | Iterable[tuple[datetime | IntegralNanosecondsUTC, str]]
    ),
    timestamp_type: LogTimestampType = "absolute",
    description: str | None = None,
) -> LogSet
Create an immutable log set with the given logs.
The logs are attached during creation and cannot be modified afterwards. Each log is either a Log or a tuple of a timestamp and a string. The timestamp type must be either 'absolute' or 'relative'.
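Example (a minimal sketch using (timestamp, message) tuples; the name and messages are illustrative):
from datetime import datetime, timezone

import nominal as nm

logs = [
    (datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc), "igniter armed"),
    (datetime(2024, 1, 1, 12, 0, 5, tzinfo=timezone.utc), "main valve open"),
]
log_set = nm.create_log_set("Startup sequence", logs, timestamp_type="absolute")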
create_run
create_run(
    name: str,
    start: datetime | str | IntegralNanosecondsUTC,
    end: datetime | str | IntegralNanosecondsUTC | None,
    description: str | None = None,
) -> Run
Create a run in the Nominal platform.
If the run has no end (for example, if it is ongoing), use end=None. To add a dataset to the run, use run.add_dataset().
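Example (a minimal sketch of an ongoing run; the RID is a hypothetical placeholder, and the add_dataset() arguments are an assumption based on the "dataset" reference name described under create_run_csv):
from datetime import datetime, timezone

import nominal as nm

run = nm.create_run(
    "Hotfire 12",
    start=datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),
    end=None,  # ongoing run
)
dataset = nm.get_dataset("ri.catalog.example")  # hypothetical RID
run.add_dataset("telemetry", dataset)  # reference name and argument order are assumptions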
create_run_csv
create_run_csv(
    file: Path | str,
    name: str,
    timestamp_column: str,
    timestamp_type: _LiteralAbsolute | Iso8601 | Epoch,
    description: str | None = None,
) -> Run
Create a dataset from a CSV file, and create a run based on it.
This is a convenience function that combines upload_csv() and create_run(), and can only be used with absolute timestamps. For relative timestamps or custom formats, use upload_dataset() and create_run() separately.
The name and description are added to the run. The dataset is created with the name "Dataset for Run: {name}". The reference name for the dataset in the run is "dataset".
The run start and end times are taken from the minimum and maximum timestamps in the CSV file's timestamp column.
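Example (a minimal sketch; the file, column name, and timestamp-type literal are illustrative assumptions):
import nominal as nm

run = nm.create_run_csv(
    "flight_data.csv",
    name="Flight 3",
    timestamp_column="timestamp",
    timestamp_type="iso_8601",  # assumed literal for absolute ISO 8601 timestamps
)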
create_streaming_connection
create_streaming_connection(
    datasource_id: str,
    connection_name: str,
    datasource_description: str | None = None,
    *,
    required_tag_names: list[str] | None = None
) -> Connection
Create a new datasource and a new connection.
datasource_id: A human-readable identifier. Must be unique within an organization.
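Example (a minimal sketch; the identifiers are illustrative):
import nominal as nm

connection = nm.create_streaming_connection(
    datasource_id="dyno-cell-4",  # must be unique within the organization
    connection_name="Dyno Cell 4 stream",
    datasource_description="Live telemetry from dyno cell 4",
)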
create_workbook_from_template
create_workbook_from_template(
    template_rid: str,
    run_rid: str,
    *,
    title: str | None = None,
    description: str | None = None,
    is_draft: bool = False
) -> Workbook
Create a new workbook from a template.
template_rid: The template to use for the workbook.
run_rid: The run to associate the workbook with.
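Example (a minimal sketch; the RIDs are hypothetical placeholders):
import nominal as nm

workbook = nm.create_workbook_from_template(
    template_rid="ri.template.example",  # hypothetical RID
    run_rid="ri.run.example",  # hypothetical RID
    title="Hotfire 12 analysis",
    is_draft=True,
)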
data_review_builder
data_review_builder() -> DataReviewBuilder
Create a batch of data reviews to be initiated together.
Example:
builder = nm.data_review_builder()
builder.add_integration("integration_rid")
builder.add_request("run_rid_1", "checklist_rid_1", "commit_1")
builder.add_request("run_rid_2", "checklist_rid_2", "commit_2")
reviews = builder.initiate()
for review in reviews:
    print(review.get_violations())
download_attachment
Retrieve an attachment from the Nominal platform and save it to a file.
get_attachment
get_attachment(rid: str) -> Attachment
Retrieve an attachment from the Nominal platform by its RID.
get_connection
get_connection(rid: str) -> Connection
Retrieve a connection from the Nominal platform by its RID.
get_data_review
get_data_review(rid: str) -> DataReview
Retrieve a data review from the Nominal platform by its RID.
get_dataset
Retrieve a dataset from the Nominal platform by its RID.
get_default_client
get_default_client() -> NominalClient
Retrieve the default client to the Nominal platform.
get_log_set
Retrieve a log set from the Nominal platform by its RID.
list_streaming_checklists
search_assets
search_assets(
    *,
    search_text: str | None = None,
    label: str | None = None,
    labels: Sequence[str] | None = None,
    property: tuple[str, str] | None = None,
    properties: Mapping[str, str] | None = None
) -> Sequence[Asset]
Search for assets meeting the specified filters.
Filters are ANDed together, e.g. (asset.label == label) AND (asset.search_text =~ field)
Parameters:
- search_text (str | None, default: None) – Case-insensitive search for any of the keywords in all string fields.
- label (str | None, default: None) – Deprecated; use labels instead.
- labels (Sequence[str] | None, default: None) – A sequence of labels that must ALL be present on an asset for it to be included.
- property (tuple[str, str] | None, default: None) – Deprecated; use properties instead.
- properties (Mapping[str, str] | None, default: None) – A mapping of key-value pairs that must ALL be present on an asset for it to be included.
Returns: Sequence[Asset] – the assets matching all of the given filters.
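Example (a minimal sketch; the filter values are illustrative):
import nominal as nm

assets = nm.search_assets(
    search_text="dyno",
    labels=["propulsion"],
    properties={"site": "el-segundo"},
)
for asset in assets:
    print(asset)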
search_runs
search_runs(
    *,
    start: str | datetime | IntegralNanosecondsUTC | None = None,
    end: str | datetime | IntegralNanosecondsUTC | None = None,
    name_substring: str | None = None,
    label: str | None = None,
    labels: Sequence[str] | None = None,
    property: tuple[str, str] | None = None,
    properties: Mapping[str, str] | None = None
) -> Sequence[Run]
Search for runs meeting the specified filters.
Filters are ANDed together, e.g. (run.label == label) AND (run.end <= end)
Parameters:
- start (str | datetime | IntegralNanosecondsUTC | None, default: None) – Inclusive start time for filtering runs.
- end (str | datetime | IntegralNanosecondsUTC | None, default: None) – Inclusive end time for filtering runs.
- name_substring (str | None, default: None) – Searches for a case-insensitive substring in the run name.
- label (str | None, default: None) – Deprecated; use labels instead.
- labels (Sequence[str] | None, default: None) – A sequence of labels that must ALL be present on a run for it to be included.
- property (tuple[str, str] | None, default: None) – Deprecated; use properties instead.
- properties (Mapping[str, str] | None, default: None) – A mapping of key-value pairs that must ALL be present on a run for it to be included.
Returns: Sequence[Run] – the runs matching all of the given filters.
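Example (a minimal sketch; the filter values are illustrative, and the string time format is assumed to be ISO 8601):
import nominal as nm

runs = nm.search_runs(
    start="2024-01-01T00:00:00Z",
    end="2024-02-01T00:00:00Z",
    name_substring="hotfire",
    labels=["propulsion"],
)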
set_base_url
set_base_url(base_url: str) -> None
Set the default Nominal platform base url.
For production environments: "https://api.gov.nominal.io/api". For staging environments: "https://api-staging.gov.nominal.io/api". For local development: "https://api.nominal.test".
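Example (switching the default client to the staging environment):
import nominal as nm

nm.set_base_url("https://api-staging.gov.nominal.io/api")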
set_token
Set the default token to be used in association with a given base url.
Use in conjunction with set_base_url().
upload_attachment
upload_attachment(
    file: Path | str, name: str, description: str | None = None
) -> Attachment
Upload an attachment to the Nominal platform.
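Example (a minimal sketch; the file, name, and description are illustrative):
import nominal as nm

attachment = nm.upload_attachment(
    "test_report.pdf",
    name="Test report",
    description="Post-test summary",
)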
upload_csv
upload_csv(
    file: Path | str,
    name: str | None,
    timestamp_column: str,
    timestamp_type: _AnyTimestampType,
    description: str | None = None,
    channel_name_delimiter: str | None = None,
    *,
    wait_until_complete: bool = True
) -> Dataset
Create a dataset in the Nominal platform from a .csv or .csv.gz file.
If name is None, the dataset is created with the name of the file.
If wait_until_complete=True (the default), this function waits until the dataset has completed ingestion before returning. If you are uploading many datasets, set wait_until_complete=False instead and call wait_until_ingestions_complete() after uploading all datasets to allow for parallel ingestion.
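Example (a minimal sketch of parallel ingestion; the files and timestamp-type literal are illustrative, and passing the list of datasets to wait_until_ingestions_complete() is an assumption, since its signature is not shown here):
import nominal as nm

datasets = [
    nm.upload_csv(
        path,
        name=None,  # default to the file name
        timestamp_column="timestamp",
        timestamp_type="iso_8601",  # assumed literal for absolute ISO 8601 timestamps
        wait_until_complete=False,
    )
    for path in ["run1.csv", "run2.csv.gz"]
]
nm.wait_until_ingestions_complete(datasets)  # assumed signature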
upload_mcap_video
upload_mcap_video(
    file: Path | str,
    topic: str,
    name: str | None = None,
    description: str | None = None,
    *,
    wait_until_complete: bool = True
) -> Video
Create a video in the Nominal platform from a topic in an MCAP file.
If name is None, the video is created with the name of the file.
If wait_until_complete=True (the default), this function waits until the video has completed ingestion before returning. If you are uploading many videos, set wait_until_complete=False instead and call wait_until_ingestion_complete() after uploading all videos to allow for parallel ingestion.
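Example (a minimal sketch; the file and topic name are illustrative):
import nominal as nm

video = nm.upload_mcap_video(
    "flight.mcap",
    topic="/camera/forward/image",
    name="Forward camera",
)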
upload_pandas
upload_pandas(
    df: DataFrame,
    name: str,
    timestamp_column: str,
    timestamp_type: _AnyTimestampType,
    description: str | None = None,
    channel_name_delimiter: str | None = None,
    *,
    wait_until_complete: bool = True
) -> Dataset
Create a dataset in the Nominal platform from a pandas.DataFrame.
If wait_until_complete=True (the default), this function waits until the dataset has completed ingestion before returning. If you are uploading many datasets, set wait_until_complete=False instead and call wait_until_ingestions_complete() after uploading all datasets to allow for parallel ingestion.
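Example (a minimal sketch; the frame contents and timestamp-type literal are illustrative assumptions):
import pandas as pd

import nominal as nm

df = pd.DataFrame(
    {
        "timestamp": ["2024-01-01T12:00:00Z", "2024-01-01T12:00:01Z"],
        "temperature": [20.5, 20.7],
    }
)
dataset = nm.upload_pandas(
    df,
    name="Temperature sweep",
    timestamp_column="timestamp",
    timestamp_type="iso_8601",  # assumed literal for absolute ISO 8601 timestamps
)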
upload_polars
upload_polars(
    df: DataFrame,
    name: str,
    timestamp_column: str,
    timestamp_type: _AnyTimestampType,
    description: str | None = None,
    channel_name_delimiter: str | None = None,
    *,
    wait_until_complete: bool = True
) -> Dataset
Create a dataset in the Nominal platform from a polars.DataFrame.
If wait_until_complete=True (the default), this function waits until the dataset has completed ingestion before returning. If you are uploading many datasets, set wait_until_complete=False instead and call wait_until_ingestions_complete() after uploading all datasets to allow for parallel ingestion.
upload_tdms
upload_tdms(
    file: Path | str,
    name: str | None = None,
    description: str | None = None,
    timestamp_column: str | None = None,
    timestamp_type: _AnyTimestampType | None = None,
    *,
    wait_until_complete: bool = True
) -> Dataset
Create a dataset in the Nominal platform from a TDMS file.
If name is None, the dataset is created with the name of the file with a .csv suffix.
If timestamp_column is provided, it must be present in every group, and the length of all data columns must be equal to (and aligned with) timestamp_column.
If timestamp_column is None, TDMS channel properties must have both a wf_increment and a wf_start_time property to be included in the dataset.
Note that timestamp_column and timestamp_type must be provided together, or omitted together.
Channels are named f"{group_name}.{channel_name}", with spaces replaced by underscores.
If wait_until_complete=True (the default), this function waits until the dataset has completed ingestion before returning. If you are uploading many datasets, set wait_until_complete=False instead and call wait_until_ingestions_complete() after uploading all datasets to allow for parallel ingestion.
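Example (a minimal sketch; the file and column name are illustrative, and the timestamp-type literal is an assumption):
import nominal as nm

# Rely on wf_increment / wf_start_time channel properties for timing:
dataset = nm.upload_tdms("dyno_test.tdms")

# Or supply an explicit timestamp column (must be present in every group):
dataset = nm.upload_tdms(
    "dyno_test.tdms",
    timestamp_column="Time",  # illustrative column name
    timestamp_type="epoch_seconds",  # assumed literal
)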