core
Attachment
dataclass
Attachment(
rid: str,
name: str,
description: str,
properties: Mapping[str, str],
labels: Sequence[str],
_clients: _Clients,
)
Bases: HasRid
get_contents
get_contents() -> BinaryIO
Retrieve the contents of this attachment. Returns a file-like object in binary mode for reading.
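For example, to save the contents to a local file (a minimal sketch; the output filename is illustrative):
import shutil
with open("attachment_contents.bin", "wb") as f:
    shutil.copyfileobj(attachment.get_contents(), f)  # stream the binary contents to disk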
update
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None
) -> Self
Replace attachment metadata. Updates the current instance and returns it.
Only the metadata passed in will be replaced; the rest will remain untouched.
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b", *attachment.labels]
attachment = attachment.update(labels=new_labels)
Channel
dataclass
Channel(
rid: str,
name: str,
data_source: str,
data_type: ChannelDataType | None,
unit: str | None,
description: str | None,
_clients: _Clients,
)
Bases: HasRid
Metadata for working with channels.
to_pandas
to_pandas(
start: datetime | IntegralNanosecondsUTC | None = None,
end: datetime | IntegralNanosecondsUTC | None = None,
) -> Series[Any]
Retrieve the channel data as a pandas.Series.
The index of the series is the timestamp of the data. The index name is "timestamp" and the series name is the channel name.
Example:
s = channel.to_pandas()
print(s.name, "mean:", s.mean())
Connection
dataclass
Dataset
dataclass
Dataset(
rid: str,
name: str,
description: str | None,
properties: Mapping[str, str],
labels: Sequence[str],
bounds: DatasetBounds | None,
_clients: _Clients,
)
Bases: HasRid
nominal_url
property
nominal_url: str
Returns a URL to the page in the Nominal app containing this dataset.
add_csv_to_dataset
add_csv_to_dataset(
path: Path | str,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
) -> None
Append to a dataset from a CSV file on disk.
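A minimal sketch, assuming a local CSV with ISO 8601 timestamps (path and column name are illustrative):
dataset.add_csv_to_dataset(
    "more_data.csv",
    timestamp_column="timestamp",
    timestamp_type="iso_8601",
)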
add_to_dataset_from_io
add_to_dataset_from_io(
dataset: BinaryIO,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
file_type: tuple[str, str] | FileType = FileTypes.CSV,
) -> None
Append to a dataset from a file-like object.
file_type: a (extension, mimetype) pair describing the type of file.
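The same append from an already-open binary stream (filename illustrative):
with open("more_data.csv", "rb") as f:
    dataset.add_to_dataset_from_io(f, timestamp_column="timestamp", timestamp_type="iso_8601")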
get_channels
Look up the metadata for all matching channels associated with this dataset.
NOTE: the returned channels may also be associated with other datasets; use with caution.
exact_match: Filter the returned channels to those whose names match all provided strings (case insensitive).
    For example, a channel named 'engine_turbine_rpm' would match against ['engine', 'turbine', 'rpm'], whereas a channel named 'engine_turbine_flowrate' would not!
fuzzy_search_text: Filter the returned channels to those whose names fuzzily match the provided string.
Yields a sequence of channel metadata objects matching the provided query parameters.
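For example, to print the unit of every RPM-related channel (a sketch; the keyword usage and channel names are illustrative):
for channel in dataset.get_channels(exact_match=["rpm"]):
    print(channel.name, channel.unit)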
poll_until_ingestion_completed
poll_until_ingestion_completed(
interval: timedelta = timedelta(seconds=1),
) -> None
Block until dataset ingestion has completed. This method polls Nominal for ingest status after uploading a dataset on an interval.
Raises:
NominalIngestFailed: if the ingest failed
NominalIngestError: if the ingest status is not known
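For example, to poll every five seconds instead of the default one second:
from datetime import timedelta
dataset.poll_until_ingestion_completed(interval=timedelta(seconds=5))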
set_channel_prefix_tree
set_channel_prefix_tree(delimiter: str = '.') -> None
Index channels hierarchically by a given delimiter.
Primarily, the result of this operation is to prompt the frontend to represent channels in a tree-like manner that allows folding channels by common roots.
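A sketch, assuming channels with dot-delimited names:
# Channels named e.g. 'engine.turbine.rpm' and 'engine.turbine.temp'
# then fold under their common 'engine.turbine' root in the UI.
dataset.set_channel_prefix_tree(delimiter=".")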
set_channel_units
set_channel_units(
channels_to_units: Mapping[str, str | None],
validate_schema: bool = False,
) -> None
Set units for channels based on a provided mapping of channel names to units.
channels_to_units: A mapping of channel names to unit symbols.
    NOTE: Any existing units may be cleared from a channel by providing None as a symbol.
validate_schema: If true, raises a ValueError if non-existent channel names are provided in channels_to_units. Default is False.
Raises:
    ValueError: Unsupported unit symbol provided.
    conjure_python_client.ConjureHTTPError: Error completing requests.
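A sketch with illustrative channel names, setting one unit and clearing another:
dataset.set_channel_units(
    {"vehicle_speed": "m/s", "legacy_channel": None},  # None clears any existing unit
    validate_schema=True,  # raise if a channel name does not exist
)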
to_pandas
to_pandas(
channel_exact_match: Sequence[str] = (),
channel_fuzzy_search_text: str = "",
) -> DataFrame
Download a dataset to a pandas dataframe, optionally filtering for only specific channels of the dataset.
channel_exact_match: Filter the returned channels to those whose names match all provided strings (case insensitive).
    For example, a channel named 'engine_turbine_rpm' would match against ['engine', 'turbine', 'rpm'], whereas a channel named 'engine_turbine_flowrate' would not!
channel_fuzzy_search_text: Filter the returned channels to those whose names fuzzily match the provided string.
Returns:
    A pandas dataframe whose index is the timestamp of the data, and column names match those of the selected channels.
Example:
import nominal as nm
rid = "..." # Taken from the UI or via the SDK
dataset = nm.get_dataset(rid)
s = dataset.to_pandas()
print("index:", s.index, "index mean:", s.index.mean())
update
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None
) -> Self
Replace dataset metadata. Updates the current instance and returns it.
Only the metadata passed in will be replaced; the rest will remain untouched.
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b", *dataset.labels]
dataset = dataset.update(labels=new_labels)
LogSet
dataclass
NominalClient
dataclass
NominalClient(_clients: ClientsBunch)
checklist_builder
checklist_builder(
name: str,
description: str = "",
assignee_email: str | None = None,
assignee_rid: str | None = None,
default_ref_name: str | None = None,
) -> ChecklistBuilder
Creates a checklist builder.
You can provide at most one of assignee_email or assignee_rid. If neither is provided, the RID of the user executing the script is used as the assignee. If both are provided, a ValueError is raised.
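A minimal sketch (the name and assignee email are illustrative):
builder = client.checklist_builder(
    name="Preflight checks",
    assignee_email="ops@example.com",  # or assignee_rid=..., but not both
)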
create
classmethod
create(
base_url: str,
token: str | None,
trust_store_path: str | None = None,
connect_timeout: float = 30,
) -> Self
Create a connection to the Nominal platform.
base_url: The URL of the Nominal API platform, e.g. "https://api.gov.nominal.io/api".
token: An API token to authenticate with. By default, the token will be looked up in ~/.nominal.yml.
trust_store_path: Path to a trust store CA root file to initiate SSL connections. If not provided, certifi's trust store is used.
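For example, connecting with a token from ~/.nominal.yml:
client = NominalClient.create(
    base_url="https://api.gov.nominal.io/api",
    token=None,  # fall back to the token stored in ~/.nominal.yml
)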
create_attachment_from_io
create_attachment_from_io(
attachment: BinaryIO,
name: str,
file_type: tuple[str, str] | FileType = FileTypes.BINARY,
description: str | None = None,
*,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] = ()
) -> Attachment
Upload an attachment. The attachment must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO. If the file is not in binary mode, the requests library blocks indefinitely.
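A sketch uploading a PDF (the path, name, and file type values are illustrative):
with open("report.pdf", "rb") as f:
    attachment = client.create_attachment_from_io(
        f,
        name="report.pdf",
        file_type=("pdf", "application/pdf"),  # (extension, mimetype) pair
    )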
create_csv_dataset
create_csv_dataset(
path: Path | str,
name: str | None,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
description: str | None = None,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None
) -> Dataset
Create a dataset from a CSV file.
If name is None, the name of the file will be used.
See create_dataset_from_io for more details.
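A minimal sketch (the path and column name are illustrative):
dataset = client.create_csv_dataset(
    "flight_1.csv",
    name=None,  # default to the file name
    timestamp_column="timestamp",
    timestamp_type="iso_8601",
)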
create_dataset_from_io
create_dataset_from_io(
dataset: BinaryIO,
name: str,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
file_type: tuple[str, str] | FileType = FileTypes.CSV,
description: str | None = None,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None
) -> Dataset
Create a dataset from a file-like object. The dataset must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO. If the file is not in binary mode, the requests library blocks indefinitely.
Timestamp column types must be a CustomTimestampFormat or one of the following literals:
"iso_8601": ISO 8601 formatted strings,
"epoch_{unit}": epoch timestamps in UTC (floats or ints),
"relative_{unit}": relative timestamps (floats or ints),
where {unit} is one of: nanoseconds | microseconds | milliseconds | seconds | minutes | hours | days
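For example, uploading a CSV whose 't' column holds epoch seconds (names are illustrative):
with open("telemetry.csv", "rb") as f:
    dataset = client.create_dataset_from_io(
        f,
        name="telemetry",
        timestamp_column="t",
        timestamp_type="epoch_seconds",
    )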
create_log_set
create_log_set(
name: str,
logs: (
Iterable[Log]
| Iterable[tuple[datetime | IntegralNanosecondsUTC, str]]
),
timestamp_type: LogTimestampType = "absolute",
description: str | None = None,
) -> LogSet
Create an immutable log set with the given logs.
The logs are attached during creation and cannot be modified afterwards. Logs can either be of type Log or a tuple of a timestamp and a string. The timestamp type must be either 'absolute' or 'relative'.
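A sketch using (timestamp, message) tuples with the default 'absolute' timestamp type (names and times are illustrative):
from datetime import datetime, timezone
log_set = client.create_log_set(
    "boot logs",
    [
        (datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc), "system start"),
        (datetime(2024, 1, 1, 12, 0, 5, tzinfo=timezone.utc), "sensors online"),
    ],
)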
create_run
create_run(
name: str,
start: datetime | IntegralNanosecondsUTC,
end: datetime | IntegralNanosecondsUTC | None,
description: str | None = None,
*,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] = (),
attachments: Iterable[Attachment] | Iterable[str] = ()
) -> Run
Create a run.
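A minimal sketch (the name, times, and labels are illustrative):
from datetime import datetime, timezone
run = client.create_run(
    "Flight test 1",
    start=datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),
    end=None,  # an ongoing run
    labels=["flight-test"],
)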
create_video_from_io
create_video_from_io(
video: BinaryIO,
name: str,
start: datetime | IntegralNanosecondsUTC,
description: str | None = None,
file_type: tuple[str, str] | FileType = FileTypes.MP4,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None
) -> Video
Create a video from a file-like object.
The video must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO.
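A sketch (the path, name, and start time are illustrative):
from datetime import datetime, timezone
with open("cockpit.mp4", "rb") as f:
    video = client.create_video_from_io(
        f,
        name="cockpit",
        start=datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),
    )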
create_video_from_mcap_io
create_video_from_mcap_io(
mcap: BinaryIO,
topic: str,
name: str,
description: str | None = None,
file_type: tuple[str, str] | FileType = FileTypes.MCAP,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None
) -> Video
Create a video from a topic in an MCAP file.
The MCAP file must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO.
get_all_units
get_all_units() -> Sequence[Unit]
Retrieve the list of metadata for all supported units within Nominal.
get_attachments
get_attachments(rids: Iterable[str]) -> Sequence[Attachment]
Retrieve attachments by their RIDs.
get_channel
Get metadata for a given channel by looking up its RID.
Args:
    rid: Identifier for the channel to look up.
Returns:
    Resolved metadata for the requested channel.
Raises:
    conjure_python_client.ConjureHTTPError: An error occurred while looking up the channel. This typically occurs when there is no such channel for the given RID.
get_commensurable_units
Get the list of units that are commensurable with (convertible to/from) the given unit symbol.
get_datasets
Retrieve datasets by their RIDs.
get_log_set
Retrieve a log set along with its metadata given its RID.
get_unit
get_unit(unit_symbol: str) -> Unit | None
Get details of the given unit symbol, or None if invalid.
Args:
    unit_symbol: Symbol of the unit to get metadata for.
        NOTE: This currently requires that units are formatted as laid out in the latest UCUM standards (see https://ucum.org/ucum).
Returns:
    Rendered Unit metadata if the symbol is valid and supported by Nominal, or None if no such unit symbol matches.
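For example (assuming 'm/s' is a supported UCUM symbol):
unit = client.get_unit("m/s")
print(unit if unit is not None else "unsupported symbol")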
search_runs
search_runs(
start: datetime | IntegralNanosecondsUTC | None = None,
end: datetime | IntegralNanosecondsUTC | None = None,
name_substring: str | None = None,
label: str | None = None,
property: tuple[str, str] | None = None,
) -> Sequence[Run]
Search for runs meeting the specified filters.
Filters are ANDed together, e.g. (run.label == label) AND (run.end <= end)
- start and end times are both inclusive
- name_substring: search for a (case-insensitive) substring in the name
- property is a key-value pair, e.g. ("name", "value")
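A sketch combining several filters (the values are illustrative):
runs = client.search_runs(
    name_substring="flight",
    label="flight-test",
    property=("vehicle", "alpha"),
)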
set_channel_units
Set the units for a set of channels based on user-provided unit symbols.
Args:
    rids_to_types: Mapping of channel RIDs -> unit symbols (e.g. 'm/s').
        NOTE: Providing None as the unit symbol clears any existing units for the channels.
Returns:
    A sequence of metadata for all updated channels.
Raises:
    conjure_python_client.ConjureHTTPError: An error occurred while setting metadata on the channel. This typically occurs when either the units are invalid, or there are no channels with the given RIDs present.
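A sketch, taking RIDs from previously fetched channels:
client.set_channel_units({
    channel.rid: "m/s",        # set a unit
    other_channel.rid: None,   # clear any existing unit
})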
Run
dataclass
Run(
rid: str,
name: str,
description: str,
properties: Mapping[str, str],
labels: Sequence[str],
start: IntegralNanosecondsUTC,
end: IntegralNanosecondsUTC | None,
run_number: int,
_clients: _Clients,
)
Bases: HasRid
add_attachments
add_attachments(
attachments: Iterable[Attachment] | Iterable[str],
) -> None
Add attachments that have already been uploaded to this run.
attachments can be Attachment instances or attachment RIDs.
add_dataset
Add a dataset to this run.
Datasets map "ref names" (their name within the run) to a Dataset (or dataset rid). The same type of datasets should use the same ref name across runs, since checklists and templates use ref names to reference datasets.
add_datasets
Add multiple datasets to this run.
Datasets map "ref names" (their name within the run) to a Dataset (or dataset rid). The same type of datasets should use the same ref name across runs, since checklists and templates use ref names to reference datasets.
add_log_set
Add a log set to this run.
Log sets map "ref names" (their name within the run) to a Log set (or log set rid).
add_log_sets
Add multiple log sets to this run.
Log sets map "ref names" (their name within the run) to a Log set (or log set rid).
archive
archive() -> None
Archive this run. Archived runs are not deleted, but are hidden from the UI.
list_datasets
List the datasets associated with this run. Returns (ref_name, dataset) pairs for each dataset.
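For example:
for ref_name, dataset in run.list_datasets():
    print(ref_name, dataset.rid)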
remove_attachments
remove_attachments(
attachments: Iterable[Attachment] | Iterable[str],
) -> None
Remove attachments from this run. Does not remove the attachments from Nominal.
attachments can be Attachment instances or attachment RIDs.
update
update(
*,
name: str | None = None,
start: datetime | IntegralNanosecondsUTC | None = None,
end: datetime | IntegralNanosecondsUTC | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None
) -> Self
Replace run metadata. Updates the current instance and returns it. Only the metadata passed in will be replaced; the rest will remain untouched.
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b", *run.labels]
run = run.update(labels=new_labels)
Video
dataclass
Video(
rid: str,
name: str,
description: str | None,
properties: Mapping[str, str],
labels: Sequence[str],
_clients: _Clients,
)
Bases: HasRid
archive
archive() -> None
Archives the video, preventing it from showing up in the 'All Videos' pane in the UI.
poll_until_ingestion_completed
poll_until_ingestion_completed(
interval: timedelta = timedelta(seconds=1),
) -> None
Block until video ingestion has completed. This method polls Nominal for ingest status after uploading a video on an interval.
Raises:
NominalIngestFailed: if the ingest failed
NominalIngestError: if the ingest status is not known
unarchive
unarchive() -> None
Unarchives the video, allowing it to show up in the 'All Videos' pane in the UI.
update
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None
) -> Self
Replace video metadata. Updates the current instance and returns it.
Only the metadata passed in will be replaced; the rest will remain untouched.
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b", *video.labels]
video = video.update(labels=new_labels)
poll_until_ingestion_completed
poll_until_ingestion_completed(
datasets: Iterable[Dataset],
interval: timedelta = timedelta(seconds=1),
) -> None
Block until all dataset ingestions have completed (succeeded or failed).
This method polls Nominal for ingest status on each of the datasets on an interval. No specific ordering is guaranteed, but all datasets will be checked at least once.
Raises:
NominalIngestMultiError: if any of the datasets failed to ingest
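A sketch waiting on two previously created datasets (the import path is an assumption):
from nominal.core import poll_until_ingestion_completed
poll_until_ingestion_completed([dataset_a, dataset_b])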