core
¤
Asset
dataclass
¤
Asset(
rid: str,
name: str,
description: str | None,
properties: Mapping[str, str],
labels: Sequence[str],
_clients: _Clients,
)
Bases: HasRid
nominal_url
property
¤
nominal_url: str
Returns a link to the page for this Asset in the Nominal app
add_attachments
¤
add_attachments(
attachments: Iterable[Attachment] | Iterable[str],
) -> None
Add attachments that have already been uploaded to this asset.
attachments can be Attachment instances, or attachment RIDs.
add_connection
¤
add_connection(
data_scope_name: str,
connection: Connection | str,
*,
series_tags: dict[str, str] | None = None
) -> None
Add a connection to this asset.
Assets map each "data scope name" (the name within the asset) to a Connection (or connection RID). The same type of connection should use the same data scope name across assets, since checklists and templates use data scope names to reference connections.
add_dataset
¤
Add a dataset to this asset.
Assets map each "data_scope_name" (the name within the asset) to a Dataset (or dataset RID). The same type of dataset should use the same data scope name across assets, since checklists and templates use data scope names to reference datasets.
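For example (a minimal sketch; asset and dataset are assumed to already exist, e.g. obtained via NominalClient, and add_dataset is assumed to take the data scope name followed by the dataset, mirroring add_connection above):

# Attach the dataset under a stable data scope name so checklists/templates can reference it.
asset.add_dataset("telemetry", dataset)

# Later lookups use the same data scope name.
telemetry = asset.get_dataset("telemetry")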
add_log_set
¤
Add a log set to this asset.
Log sets map "ref names" (their name within the asset) to a log set (or log set RID).
add_video
¤
Add a video to this asset.
Assets map each "data_scope_name" (the name within the asset for the data) to a Video (or a video RID). The same type of video (e.g., files from a given camera) should use the same data scope name across assets, since checklists and templates use data scope names to reference videos.
archive
¤
archive() -> None
Archive this asset. Archived assets are not deleted, but are hidden from the UI.
get_connection
¤
get_connection(data_scope_name: str) -> Connection
Retrieve a connection by data scope name, or raise ValueError if one is not found.
get_data_scope
¤
get_data_scope(data_scope_name: str) -> ScopeType
Retrieve a data scope by data scope name, or raise ValueError if one is not found.
get_dataset
¤
Retrieve a dataset by data scope name, or raise ValueError if one is not found.
get_video
¤
Retrieve a video by data scope name, or raise ValueError if one is not found.
list_connections
¤
list_connections() -> Sequence[tuple[str, Connection]]
List the connections associated with this asset. Returns (data_scope_name, connection) pairs for each connection.
list_data_scopes
¤
List scopes associated with this asset. Returns (data_scope_name, scope) pairs, where scope can be a dataset, connection, video, or logset.
list_datasets
¤
List the datasets associated with this asset. Returns (data_scope_name, dataset) pairs for each dataset.
list_logsets
¤
List the logsets associated with this asset. Returns (data_scope_name, logset) pairs for each logset.
list_videos
¤
List the videos associated with this asset. Returns (data_scope_name, video) pairs for each video.
remove_attachments
¤
remove_attachments(
attachments: Iterable[Attachment] | Iterable[str],
) -> None
Remove attachments from this asset. Does not remove the attachments from Nominal.
attachments can be Attachment instances, or attachment RIDs.
remove_data_scopes
¤
remove_data_scopes(
*,
names: Sequence[str] | None = None,
scopes: Sequence[ScopeType | str] | None = None
) -> None
Remove data scopes from this asset.
names are data scope names.
scopes are RIDs or scope objects.
remove_data_sources
¤
remove_data_sources(
*,
data_scope_names: Sequence[str] | None = None,
data_sources: Sequence[ScopeType | str] | None = None
) -> None
Remove data sources from this asset.
The data_sources list can contain Connection, Dataset, or Video instances, or RIDs as strings.
update
¤
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None,
links: Sequence[str] | Sequence[Link] | None = None
) -> Self
Replace asset metadata. Updates the current instance, and returns it. Only the metadata passed in will be replaced, the rest will remain untouched.
Links can be URLs or tuples of (URL, name).
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b"]
for old_label in asset.labels:
    new_labels.append(old_label)
asset = asset.update(labels=new_labels)
Attachment
dataclass
¤
Attachment(
rid: str,
name: str,
description: str,
properties: Mapping[str, str],
labels: Sequence[str],
_clients: _Clients,
)
Bases: HasRid
archive
¤
archive() -> None
Archive this attachment. Archived attachments are not deleted, but are hidden from the UI.
get_contents
¤
get_contents() -> BinaryIO
Retrieve the contents of this attachment. Returns a file-like object in binary mode for reading.
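For example (a small sketch; attachment is assumed to be an Attachment instance retrieved elsewhere, and the output filename is a placeholder):

import shutil

# get_contents() returns a binary file-like object; copy it to a local file.
with open("attachment_copy.bin", "wb") as f:
    shutil.copyfileobj(attachment.get_contents(), f)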
update
¤
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None
) -> Self
Replace attachment metadata. Updates the current instance, and returns it.
Only the metadata passed in will be replaced, the rest will remain untouched.
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b", *attachment.labels]
attachment = attachment.update(labels=new_labels)
Channel
dataclass
¤
Channel(
name: str,
data_source: str,
data_type: ChannelDataType | None,
unit: str | None,
description: str | None,
_clients: _Clients,
_rid: str,
)
Metadata for working with channels.
get_decimated
¤
get_decimated(
start: str | datetime | IntegralNanosecondsUTC,
end: str | datetime | IntegralNanosecondsUTC,
*,
buckets: int | None = None,
resolution: int | None = None
) -> DataFrame
Retrieve the channel data as a pandas.DataFrame, decimated to the given buckets or resolution.
to_pandas
¤
to_pandas(
start: datetime | IntegralNanosecondsUTC | None = None,
end: datetime | IntegralNanosecondsUTC | None = None,
) -> Series[Any]
Retrieve the channel data as a pandas.Series.
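For example (a sketch; channel is assumed to be a Channel obtained from a dataset or connection, and the time window is a placeholder):

from datetime import datetime, timezone

# Restrict the download to a one-hour window (both bounds are optional).
series = channel.to_pandas(
    start=datetime(2024, 1, 1, tzinfo=timezone.utc),
    end=datetime(2024, 1, 1, 1, tzinfo=timezone.utc),
)
print(series.describe())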
update
¤
update(
*, description: str | None = None, unit: str | None = None
) -> Self
Replace channel metadata within Nominal, updating and returning the local instance.
Only the metadata passed in will be replaced, the rest will remain untouched.
Checklist
dataclass
¤
Checklist(
rid: str,
name: str,
description: str,
properties: Mapping[str, str],
labels: Sequence[str],
_clients: _Clients,
)
Bases: HasRid
archive
¤
archive() -> None
Archive this checklist. Archived checklists are not deleted, but are hidden from the UI.
execute
¤
Execute a checklist against a run.
Parameters:
- run (Run | str): Run (or its RID) to execute the checklist against.
- commit (str | None, default None): Commit hash of the version of the checklist to run, or None for the latest version.
Returns:
- DataReview: Created data review for the checklist execution.
execute_streaming
¤
execute_streaming(
assets: Sequence[Asset | str],
integration_rids: Sequence[str],
*,
evaluation_delay: timedelta = timedelta(),
recovery_delay: timedelta = timedelta(seconds=15)
) -> None
Execute the checklist for the given assets.
- assets: Can be Asset instances, or Asset RIDs.
- integration_rids: Checklist violations will be sent to the specified integrations. At least one integration must be specified. See https://app.gov.nominal.io/settings/integrations for a list of available integrations.
- evaluation_delay: Delays the evaluation of the streaming checklist. This is useful for when data is delayed.
- recovery_delay: Specifies the minimum amount of time that must pass before a check can recover from a failure. Minimum value is 15 seconds.
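For example (a sketch; checklist and asset are assumed to exist, the integration RID is a placeholder, and stop_streaming_for_assets is assumed to accept the same sequence of assets):

from datetime import timedelta

checklist.execute_streaming(
    assets=[asset],
    integration_rids=["ri.integrations.example"],  # placeholder RID
    evaluation_delay=timedelta(seconds=30),        # tolerate delayed data
    recovery_delay=timedelta(seconds=15),          # minimum allowed value
)

# ... later, stop evaluation for the same assets
checklist.stop_streaming_for_assets([asset])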
stop_streaming_for_assets
¤
Stop the checklist for the given assets.
Connection
dataclass
¤
Connection(
rid: str,
_clients: _Clients,
name: str,
description: str | None,
_tags: Mapping[str, Sequence[str]],
)
Bases: DataSource
archive
¤
archive() -> None
Archive this connection. Archived connections are not deleted, but are hidden from the UI.
get_channels
¤
Look up the metadata for all matching channels associated with this datasource.
names: List of channel names to look up metadata for.
Yields a sequence of channel metadata objects which match the provided query parameters.
search_channels
¤
search_channels(
exact_match: Sequence[str] = (), fuzzy_search_text: str = ""
) -> Iterable[Channel]
Look up channels associated with a datasource.
Parameters:
- exact_match (Sequence[str], default ()): Filter the returned channels to those whose names match all provided strings (case insensitive).
- fuzzy_search_text (str, default ''): Filters the returned channels to those whose names fuzzily match the provided string.
set_channel_prefix_tree
¤
set_channel_prefix_tree(delimiter: str = '.') -> None
Index channels hierarchically by a given delimiter.
Primarily, the result of this operation is to prompt the frontend to represent channels in a tree-like manner that allows folding channels by common roots.
set_channel_units
¤
set_channel_units(
channels_to_units: Mapping[str, str | None],
validate_schema: bool = False,
) -> None
Set units for channels based on a provided mapping of channel names to units.
channels_to_units: A mapping of channel names to unit symbols. NOTE: any existing units may be cleared from a channel by providing None as a symbol.
validate_schema: If true, raises a ValueError if non-existent channel names are provided in channels_to_units. Default is False.
Raises:
ValueError: Unsupported unit symbol provided.
conjure_python_client.ConjureHTTPError: Error completing requests.
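For example (a sketch; connection is assumed to be a Connection and the channel names are placeholders; unit symbols must be ones Nominal supports):

connection.set_channel_units(
    {
        "velocity_x": "m/s",   # assign a unit
        "raw_counts": None,    # clear any existing unit
    },
    validate_schema=True,      # raise if a channel name does not exist
)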
to_pandas
¤
to_pandas(
channel_exact_match: Sequence[str] = (),
channel_fuzzy_search_text: str = "",
start: str | datetime | IntegralNanosecondsUTC | None = None,
end: str | datetime | IntegralNanosecondsUTC | None = None,
tags: dict[str, str] | None = None,
) -> DataFrame
Download a dataset to a pandas dataframe, optionally filtering for only specific channels of the dataset.
DataReview
dataclass
¤
DataReview(
rid: str,
run_rid: str,
checklist_rid: str,
checklist_commit: str,
completed: bool,
_clients: _Clients,
)
Bases: HasRid
nominal_url
property
¤
nominal_url: str
Returns a link to the page for this Data Review in the Nominal app
archive
¤
archive() -> None
Archive this data review. Archived data reviews are not deleted, but are hidden from the UI.
NOTE: it is not currently possible to unarchive a data review once archived.
get_violations
¤
get_violations() -> Sequence[CheckViolation]
Retrieves the list of check violations for the data review.
poll_for_completion
¤
poll_for_completion(
interval: timedelta = timedelta(seconds=2),
) -> DataReview
Polls the data review until it is completed.
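For example (a sketch combining Checklist.execute with polling and violation retrieval; checklist and run are assumed to exist, and run is assumed to be accepted positionally):

review = checklist.execute(run)        # returns a DataReview
review = review.poll_for_completion()  # block until evaluation finishes
for violation in review.get_violations():
    print(violation)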
DataReviewBuilder
dataclass
¤
DataReviewBuilder(
_integration_rids: list[str],
_requests: list[CreateDataReviewRequest],
_clients: _Clients,
)
initiate
¤
initiate(wait_for_completion: bool = True) -> Sequence[DataReview]
Dataset
dataclass
¤
Dataset(
rid: str,
_clients: _Clients,
name: str,
description: str | None,
properties: Mapping[str, str],
labels: Sequence[str],
bounds: DatasetBounds | None,
)
Bases: DataSource
nominal_url
property
¤
nominal_url: str
Returns a URL to the page in the nominal app containing this dataset
add_ardupilot_dataflash_to_dataset
¤
Add a Dataflash file to an existing dataset.
add_csv_to_dataset
¤
add_csv_to_dataset(
path: Path | str,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
) -> None
Append to a dataset from a CSV file on disk.
add_data_to_dataset
¤
add_data_to_dataset(
path: Path | str,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
) -> None
Append to a dataset from a tabular data file on-disk.
add_journal_json_to_dataset
¤
Add a journald JSONL file to an existing dataset.
add_mcap_to_dataset
¤
add_mcap_to_dataset(
path: Path | str,
include_topics: Iterable[str] | None = None,
exclude_topics: Iterable[str] | None = None,
) -> None
Add an MCAP file to an existing dataset.
path: Path to the MCAP file to add to this dataset
include_topics: If present, list of topics to restrict ingestion to.
If not present, defaults to all protobuf-encoded topics present in the MCAP.
exclude_topics: If present, list of topics to not ingest from the MCAP.
add_mcap_to_dataset_from_io
¤
add_mcap_to_dataset_from_io(
mcap: BinaryIO,
include_topics: Iterable[str] | None = None,
exclude_topics: Iterable[str] | None = None,
file_name: str | None = None,
) -> None
Add data to this dataset from an MCAP file-like object.
The mcap must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO. If the file is not in binary-mode, the requests library blocks indefinitely.
mcap: Binary file-like MCAP stream
include_topics: If present, list of topics to restrict ingestion to.
If not present, defaults to all protobuf-encoded topics present in the MCAP.
exclude_topics: If present, list of topics to not ingest from the MCAP.
file_name: If present, name to use when uploading file. Otherwise, defaults to dataset name.
add_tabular_data_to_dataset
¤
add_tabular_data_to_dataset(
path: Path | str,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
) -> None
Append to a dataset from tabular data on-disk.
Currently, the supported filetypes are:
- .csv / .csv.gz
- .parquet
Parameters:
- path (Path | str): Path to the file on disk to add to the dataset.
- timestamp_column (str): Column within the file containing timestamp information. NOTE: this is omitted as a channel from the data added to Nominal, and is instead used to set the timestamps for all other uploaded data channels.
- timestamp_type (_AnyTimestampType): Type of timestamp data contained within the timestamp_column, e.g. 'epoch_seconds'.
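For example (a sketch; dataset is assumed to exist and the file and column names are placeholders):

dataset.add_tabular_data_to_dataset(
    "flight_02.parquet",                  # .csv, .csv.gz, and .parquet are supported
    timestamp_column="timestamp",
    timestamp_type="epoch_seconds",
)
dataset.poll_until_ingestion_completed()  # block until the new data is ingested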
add_to_dataset_from_io
¤
add_to_dataset_from_io(
dataset: BinaryIO,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
file_type: tuple[str, str] | FileType = CSV,
file_name: str | None = None,
) -> None
Append to a dataset from a file-like object.
file_type: a (extension, mimetype) pair describing the type of file.
archive
¤
archive() -> None
Archive this dataset. Archived datasets are not deleted, but are hidden from the UI.
get_channels
¤
get_channels(
exact_match: Sequence[str] = (),
fuzzy_search_text: str = "",
*,
names: Iterable[str] | None = None
) -> Iterable[Channel]
Look up the metadata for all matching channels associated with this dataset.
exact_match: Filter the returned channels to those whose names match all provided strings (case insensitive). For example, a channel named 'engine_turbine_rpm' would match against ['engine', 'turbine', 'rpm'], whereas a channel named 'engine_turbine_flowrate' would not!
fuzzy_search_text: Filters the returned channels to those whose names fuzzily match the provided string.
names: List of channel names to look up metadata for. This parameter is preferred over exact_match and fuzzy_search_text, which are deprecated.
Yields a sequence of channel metadata objects which match the provided query parameters.
poll_until_ingestion_completed
¤
Block until dataset ingestion has completed. This method polls Nominal for ingest status after uploading a dataset on an interval.
Raises:
NominalIngestFailed: if the ingest failed
NominalIngestError: if the ingest status is not known
search_channels
¤
search_channels(
exact_match: Sequence[str] = (), fuzzy_search_text: str = ""
) -> Iterable[Channel]
Look up channels associated with a datasource.
Parameters:
- exact_match (Sequence[str], default ()): Filter the returned channels to those whose names match all provided strings (case insensitive).
- fuzzy_search_text (str, default ''): Filters the returned channels to those whose names fuzzily match the provided string.
set_channel_prefix_tree
¤
set_channel_prefix_tree(delimiter: str = '.') -> None
Index channels hierarchically by a given delimiter.
Primarily, the result of this operation is to prompt the frontend to represent channels in a tree-like manner that allows folding channels by common roots.
set_channel_units
¤
set_channel_units(
channels_to_units: Mapping[str, str | None],
validate_schema: bool = False,
) -> None
Set units for channels based on a provided mapping of channel names to units.
channels_to_units: A mapping of channel names to unit symbols. NOTE: any existing units may be cleared from a channel by providing None as a symbol.
validate_schema: If true, raises a ValueError if non-existent channel names are provided in channels_to_units. Default is False.
Raises:
ValueError: Unsupported unit symbol provided.
conjure_python_client.ConjureHTTPError: Error completing requests.
to_pandas
¤
to_pandas(
channel_exact_match: Sequence[str] = (),
channel_fuzzy_search_text: str = "",
start: str | datetime | IntegralNanosecondsUTC | None = None,
end: str | datetime | IntegralNanosecondsUTC | None = None,
tags: dict[str, str] | None = None,
) -> DataFrame
Download a dataset to a pandas dataframe, optionally filtering for only specific channels of the dataset.
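For example (a sketch; dataset is assumed to exist and the channel terms and time bounds are placeholders):

df = dataset.to_pandas(
    channel_exact_match=["engine", "rpm"],  # channels whose names contain both terms
    start="2024-01-01T00:00:00Z",
    end="2024-01-01T01:00:00Z",
)
print(df.head())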
unarchive
¤
unarchive() -> None
Unarchives this dataset, allowing it to show up in the 'All Datasets' pane in the UI.
update
¤
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None
) -> Self
Replace dataset metadata. Updates the current instance, and returns it.
Only the metadata passed in will be replaced, the rest will remain untouched.
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b"]
for old_label in dataset.labels:
    new_labels.append(old_label)
dataset = dataset.update(labels=new_labels)
LogSet
dataclass
¤
NominalClient
dataclass
¤
NominalClient(_clients: ClientsBunch)
create
classmethod
¤
create(
base_url: str,
token: str | None,
trust_store_path: str | None = None,
connect_timeout: float = 30,
) -> Self
Create a connection to the Nominal platform.
base_url: The URL of the Nominal API platform, e.g. "https://api.gov.nominal.io/api".
token: An API token to authenticate with. By default, the token will be looked up in ~/.nominal.yml.
trust_store_path: Path to a trust store CA root file to initiate SSL connections. If not provided, certifi's trust store is used.
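For example (a minimal connection sketch; the base URL is the example from above, the token falls back to ~/.nominal.yml when None, and the import path is assumed):

from nominal.core import NominalClient  # import path assumed

client = NominalClient.create(
    base_url="https://api.gov.nominal.io/api",
    token=None,  # use the token stored in ~/.nominal.yml
)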
create_ardupilot_dataflash_dataset
¤
create_ardupilot_dataflash_dataset(
path: Path | str,
name: str | None,
description: str | None = None,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
prefix_tree_delimiter: str | None = None
) -> Dataset
Create a dataset from an ArduPilot DataFlash log file.
If name is None, the name of the file will be used.
See create_dataset_from_io
for more details.
create_asset
¤
create_asset(
name: str,
description: str | None = None,
*,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] = ()
) -> Asset
Create an asset.
create_attachment_from_io
¤
create_attachment_from_io(
attachment: BinaryIO,
name: str,
file_type: tuple[str, str] | FileType = BINARY,
description: str | None = None,
*,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] = ()
) -> Attachment
Upload an attachment. The attachment must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO. If the file is not in binary-mode, the requests library blocks indefinitely.
create_csv_dataset
¤
create_csv_dataset(
path: Path | str,
name: str | None,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
description: str | None = None,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
prefix_tree_delimiter: str | None = None,
channel_prefix: str | None = None
) -> Dataset
Create a dataset from a CSV file.
If name is None, the name of the file will be used.
See create_dataset_from_io
for more details.
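For example (a sketch; client is assumed to be a NominalClient and the file, column, and label names are placeholders):

dataset = client.create_csv_dataset(
    "flight_01.csv",
    name=None,                    # None -> use the file name
    timestamp_column="timestamp",
    timestamp_type="iso_8601",
    labels=["flight-test"],
    prefix_tree_delimiter=".",    # view channels hierarchically by '.'
)
dataset.poll_until_ingestion_completed()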
create_dataset
¤
create_dataset(
name: str,
*,
description: str | None = None,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
prefix_tree_delimiter: str | None = None
) -> Dataset
Create an empty dataset.
Parameters:
- name (str): Name of the dataset to create in Nominal.
- description (str | None, default None): Human-readable description of the dataset.
- labels (Sequence[str], default ()): Text labels to apply to the created dataset.
- properties (Mapping[str, str] | None, default None): Key-value properties to apply to the created dataset.
- prefix_tree_delimiter (str | None, default None): If present, the delimiter to represent tiers when viewing channels hierarchically.
Returns:
- Dataset: Reference to the created dataset in Nominal.
create_dataset_from_io
¤
create_dataset_from_io(
dataset: BinaryIO,
name: str,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
file_type: tuple[str, str] | FileType = CSV,
description: str | None = None,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
prefix_tree_delimiter: str | None = None,
channel_prefix: str | None = None,
file_name: str | None = None
) -> Dataset
Create a dataset from a file-like object. The dataset must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO. If the file is not in binary-mode, the requests library blocks indefinitely.
Timestamp column types must be a CustomTimestampFormat or one of the following literals:
- "iso_8601": ISO 8601 formatted strings,
- "epoch_{unit}": epoch timestamps in UTC (floats or ints),
- "relative_{unit}": relative timestamps (floats or ints),
where {unit} is one of: nanoseconds | microseconds | milliseconds | seconds | minutes | hours | days
Parameters:
- dataset (BinaryIO): Binary file-like tabular data stream.
- name (str): Name of the dataset to create.
- timestamp_column (str): Column of data containing timestamp information for all other columns.
- timestamp_type (_AnyTimestampType): Type of timestamps contained within timestamp_column.
- file_type (tuple[str, str] | FileType, default CSV): Type of file being ingested (e.g. CSV, parquet, etc.). Used for naming the file uploaded to cloud storage as part of ingestion.
- description (str | None, default None): Human-readable description of the dataset to create.
- labels (Sequence[str], default ()): Text labels to apply to the created dataset.
- properties (Mapping[str, str] | None, default None): Key-value properties to apply to the created dataset.
- prefix_tree_delimiter (str | None, default None): If present, the delimiter to represent tiers when viewing channels hierarchically.
- channel_prefix (str | None, default None): Prefix to apply to newly created channels.
- file_name (str | None, default None): Name of the file (without extension) to create when uploading.
Returns:
- Dataset: Reference to the constructed dataset object.
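For example (a sketch of uploading in-memory CSV data; client is assumed to exist, and the stream must be opened in binary mode):

import io

csv_bytes = b"timestamp,temperature\n2024-01-01T00:00:00Z,21.5\n"
dataset = client.create_dataset_from_io(
    io.BytesIO(csv_bytes),        # binary stream, never text mode
    name="cabin-temperature",
    timestamp_column="timestamp",
    timestamp_type="iso_8601",
)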
create_dataset_from_mcap_io
¤
create_dataset_from_mcap_io(
dataset: BinaryIO,
name: str,
description: str | None = None,
include_topics: Iterable[str] | None = None,
exclude_topics: Iterable[str] | None = None,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
prefix_tree_delimiter: str | None = None,
file_name: str | None = None
) -> Dataset
Create a dataset from an mcap file-like object.
The dataset must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO. If the file is not in binary-mode, the requests library blocks indefinitely.
Parameters:
- dataset (BinaryIO): Binary file-like MCAP stream.
- name (str): Name of the dataset to create.
- description (str | None, default None): Human-readable description of the dataset to create.
- include_topics (Iterable[str] | None, default None): If present, list of topics to restrict ingestion to. If not present, defaults to all protobuf-encoded topics present in the MCAP.
- exclude_topics (Iterable[str] | None, default None): If present, list of topics to not ingest from the MCAP.
- labels (Sequence[str], default ()): Text labels to apply to the created dataset.
- properties (Mapping[str, str] | None, default None): Key-value properties to apply to the created dataset.
- prefix_tree_delimiter (str | None, default None): If present, the delimiter to represent tiers when viewing channels hierarchically.
- file_name (str | None, default None): If present, name (without extension) to use when uploading file. Otherwise, defaults to name.
Returns:
- Dataset: Reference to the constructed dataset object.
create_journal_json_dataset
¤
create_journal_json_dataset(
path: Path | str,
name: str | None,
description: str | None = None,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
prefix_tree_delimiter: str | None = None
) -> Dataset
Create a dataset from a journal log file with json output format.
Intended to be used with the recorded output of journalctl --output json ....
The path extension is expected to be .jsonl, or .jsonl.gz if gzipped.
If name is None, the name of the file will be used.
See create_dataset_from_io for more details.
create_log_set
¤
create_log_set(
name: str,
logs: (
Iterable[Log]
| Iterable[tuple[datetime | IntegralNanosecondsUTC, str]]
),
timestamp_type: LogTimestampType = "absolute",
description: str | None = None,
) -> LogSet
Create an immutable log set with the given logs.
The logs are attached during creation and cannot be modified afterwards. Logs can either be of type Log or a tuple of a timestamp and a string. Timestamp type must be either 'absolute' or 'relative'.
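For example (a sketch; client is assumed to exist, the log set name and messages are placeholders, and timestamps here are absolute datetimes):

from datetime import datetime, timezone

log_set = client.create_log_set(
    "boot-logs",
    logs=[
        (datetime(2024, 1, 1, tzinfo=timezone.utc), "system boot"),
        (datetime(2024, 1, 1, 0, 0, 5, tzinfo=timezone.utc), "sensors online"),
    ],
    timestamp_type="absolute",
)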
create_mcap_dataset
¤
create_mcap_dataset(
path: Path | str,
name: str | None,
description: str | None = None,
include_topics: Iterable[str] | None = None,
exclude_topics: Iterable[str] | None = None,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
prefix_tree_delimiter: str | None = None
) -> Dataset
Create a dataset from an MCAP file.
If name is None, the name of the file will be used.
See create_dataset_from_mcap_io
for more details on the other arguments.
create_run
¤
create_run(
name: str,
start: datetime | IntegralNanosecondsUTC,
end: datetime | IntegralNanosecondsUTC | None,
description: str | None = None,
*,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] = (),
attachments: Iterable[Attachment] | Iterable[str] = (),
asset: Asset | str | None = None
) -> Run
Create a run.
create_tabular_dataset
¤
create_tabular_dataset(
path: Path | str,
name: str | None,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
description: str | None = None,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
prefix_tree_delimiter: str | None = None,
channel_prefix: str | None = None
) -> Dataset
Create a dataset from a table-like file.
Currently, the supported filetypes are: - .csv / .csv.gz - .parquet
If name is None, the name of the file will be used.
See create_dataset_from_io
for more details.
create_video
¤
create_video(
path: Path | str,
name: str | None = None,
start: datetime | IntegralNanosecondsUTC | None = None,
frame_timestamps: Sequence[IntegralNanosecondsUTC] | None = None,
description: str | None = None,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None
) -> Video
Create a video from an h264/h265 encoded video file (mp4, mkv, ts, etc.).
If name is None, the name of the file will be used.
See create_video_from_io
for more details.
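For example (a sketch of uploading an H264-encoded video with a known start time; client is assumed to exist and the path and label are placeholders):

from datetime import datetime, timezone

video = client.create_video(
    "chase_cam.mp4",
    start=datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc),  # exactly one of start / frame_timestamps
    labels=["chase-cam"],
)
video.poll_until_ingestion_completed()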
create_video_from_io
¤
create_video_from_io(
video: BinaryIO,
name: str,
start: datetime | IntegralNanosecondsUTC | None = None,
frame_timestamps: Sequence[IntegralNanosecondsUTC] | None = None,
description: str | None = None,
file_type: tuple[str, str] | FileType = MP4,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
file_name: str | None = None
) -> Video
Create a video from a file-like object. The video must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO.
video: file-like object to read video data from
name: Name of the video to create in Nominal
start: Starting timestamp of the video
frame_timestamps: Per-frame timestamps (in nanoseconds since unix epoch) for every frame of the video
description: Description of the video to create in nominal
file_type: Type of data being uploaded, used for naming the file uploaded to cloud storage as part
of ingestion.
labels: Labels to apply to the video in nominal
properties: Properties to apply to the video in nominal
file_name: Name (without extension) to use when uploading the video file. Defaults to video name.
Returns:
Handle to the created video.
Note:
Exactly one of 'start' and 'frame_timestamps' must be provided. Most users will want to provide a starting timestamp: frame_timestamps is primarily useful when the scale of the video data is not 1:1 with the playback speed or non-uniform over the course of the video, for example, 200fps video artificially slowed to 30 fps without dropping frames. This will result in the playhead on charts within the product playing at the rate of the underlying data rather than time elapsed in the video playback.
create_video_from_mcap
¤
create_video_from_mcap(
path: Path | str,
topic: str,
name: str | None = None,
description: str | None = None,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None
) -> Video
Create a video from an MCAP file containing H264 or H265 video data.
If name is None, the name of the file will be used.
See create_video_from_mcap_io
for more details.
create_video_from_mcap_io
¤
create_video_from_mcap_io(
mcap: BinaryIO,
topic: str,
name: str,
description: str | None = None,
file_type: tuple[str, str] | FileType = MCAP,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
file_name: str | None = None
) -> Video
Create a video from a topic in an MCAP file.
The MCAP must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO.
If name is None, the name of the file will be used.
get_all_units
¤
get_all_units() -> Sequence[Unit]
Retrieve list of metadata for all supported units within Nominal
get_attachments
¤
get_attachments(rids: Iterable[str]) -> Sequence[Attachment]
Retrieve attachments by their RIDs.
get_channel
¤
Get metadata for a given channel by looking up its RID.
Args:
rid: Identifier for the channel to look up.
Returns:
Resolved metadata for the requested channel.
Raises:
conjure_python_client.ConjureHTTPError: An error occurred while looking up the channel. This typically occurs when there is no such channel for the given RID.
get_commensurable_units
¤
Get the list of units that are commensurable (convertible to/from) the given unit symbol.
get_datasets
¤
Retrieve datasets by their RIDs.
get_log_set
¤
Retrieve a log set along with its metadata given its RID.
get_unit
¤
get_unit(unit_symbol: str) -> Unit | None
Get details of the given unit symbol, or None if invalid.
Args:
unit_symbol: Symbol of the unit to get metadata for. NOTE: This currently requires that units are formatted as laid out in the latest UCUM standards (see https://ucum.org/ucum).
Returns:
Rendered Unit metadata if the symbol is valid and supported by Nominal, or None if no such unit symbol matches.
list_streaming_checklists
¤
search_assets
¤
search_assets(
search_text: str | None = None,
label: str | None = None,
property: tuple[str, str] | None = None,
*,
labels: Sequence[str] | None = None,
properties: Mapping[str, str] | None = None
) -> Sequence[Asset]
Search for assets meeting the specified filters.
Filters are ANDed together, e.g. (asset.label == label) AND (asset.search_text =~ field)
Parameters:
- search_text (str | None, default None): Case-insensitive search for any of the keywords in all string fields.
- label (str | None, default None): Deprecated, use labels instead.
- property (tuple[str, str] | None, default None): Deprecated, use properties instead.
- labels (Sequence[str] | None, default None): A sequence of labels that must ALL be present on an asset to be included.
- properties (Mapping[str, str] | None, default None): A mapping of key-value pairs that must ALL be present on an asset to be included.
search_checklists
¤
search_checklists(
search_text: str | None = None,
labels: Sequence[str] | None = None,
properties: Mapping[str, str] | None = None,
) -> Sequence[Checklist]
Search for checklists meeting the specified filters.
Filters are ANDed together, e.g. (checklist.label == label) AND (checklist.search_text =~ field)
Parameters:
- search_text (str | None, default None): Case-insensitive search for any of the keywords in all string fields.
- labels (Sequence[str] | None, default None): A sequence of labels that must ALL be present on a checklist to be included.
- properties (Mapping[str, str] | None, default None): A mapping of key-value pairs that must ALL be present on a checklist to be included.
search_data_reviews
¤
search_data_reviews(
assets: Sequence[Asset | str] | None = None,
runs: Sequence[Run | str] | None = None,
) -> Sequence[DataReview]
Search for any data reviews present within a collection of runs and assets.
search_runs
¤
search_runs(
start: str | datetime | IntegralNanosecondsUTC | None = None,
end: str | datetime | IntegralNanosecondsUTC | None = None,
name_substring: str | None = None,
label: str | None = None,
property: tuple[str, str] | None = None,
*,
labels: Sequence[str] | None = None,
properties: Mapping[str, str] | None = None
) -> Sequence[Run]
Search for runs meeting the specified filters.
Filters are ANDed together, e.g. (run.label == label) AND (run.end <= end)
Parameters:
- start (str | datetime | IntegralNanosecondsUTC | None, default None): Inclusive start time for filtering runs.
- end (str | datetime | IntegralNanosecondsUTC | None, default None): Inclusive end time for filtering runs.
- name_substring (str | None, default None): Searches for a (case-insensitive) substring in the name.
- label (str | None, default None): Deprecated, use labels instead.
- property (tuple[str, str] | None, default None): Deprecated, use properties instead.
- labels (Sequence[str] | None, default None): A sequence of labels that must ALL be present on a run to be included.
- properties (Mapping[str, str] | None, default None): A mapping of key-value pairs that must ALL be present on a run to be included.
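For example (a sketch; client is assumed to exist and the time window, substring, label, and property values are placeholders):

runs = client.search_runs(
    start="2024-01-01T00:00:00Z",
    end="2024-02-01T00:00:00Z",
    name_substring="hotfire",
    labels=["flight-test"],
    properties={"vehicle": "sn-012"},
)
for run in runs:
    print(run.name, run.start)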
set_channel_units
¤
Sets the units for a set of channels based on user-provided unit symbols.
Args:
rids_to_types: Mapping of channel RIDs -> unit symbols (e.g. 'm/s'). NOTE: Providing None as the unit symbol clears any existing units for the channels.
Returns:
A sequence of metadata for all updated channels.
Raises:
conjure_python_client.ConjureHTTPError: An error occurred while setting metadata on the channel. This typically occurs when either the units are invalid, or there are no channels with the given RIDs present.
Run
dataclass
¤
Run(
rid: str,
name: str,
description: str,
properties: Mapping[str, str],
labels: Sequence[str],
start: IntegralNanosecondsUTC,
end: IntegralNanosecondsUTC | None,
run_number: int,
assets: Sequence[str],
_clients: _Clients,
)
Bases: HasRid
add_attachments
¤
add_attachments(
attachments: Iterable[Attachment] | Iterable[str],
) -> None
Add attachments that have already been uploaded to this run.
attachments can be Attachment instances, or attachment RIDs.
add_connection
¤
add_connection(
ref_name: str,
connection: Connection | str,
*,
series_tags: dict[str, str] | None = None
) -> None
Add a connection to this run.
Runs map each "ref name" (the name within the run) to a Connection (or connection RID). The same type of connection should use the same ref name across runs, since checklists and templates use ref names to reference connections.
add_dataset
¤
Add a dataset to this run.
Datasets map "ref names" (their name within the run) to a Dataset (or dataset RID). The same type of dataset should use the same ref name across runs, since checklists and templates use ref names to reference datasets.
add_datasets
¤
Add multiple datasets to this run.
Datasets map "ref names" (their name within the run) to a Dataset (or dataset RID). The same type of dataset should use the same ref name across runs, since checklists and templates use ref names to reference datasets.
add_log_set
¤
Add a log set to this run.
Log sets map "ref names" (their name within the run) to a Log set (or log set rid).
add_log_sets
¤
Add multiple log sets to this run.
Log sets map "ref names" (their name within the run) to a Log set (or log set rid).
add_video
¤
Add a video to a run via video object or RID.
archive
¤
archive() -> None
Archive this run. Archived runs are not deleted, but are hidden from the UI.
NOTE: it is not currently possible to unarchive a run once archived.
list_attachments
¤
list_attachments() -> Sequence[Attachment]
List a sequence of Attachments associated with this Run.
list_connections
¤
list_connections() -> Sequence[tuple[str, Connection]]
List the connections associated with this run. Returns (ref_name, connection) pairs for each connection
list_datasets
¤
List the datasets associated with this run. Returns (ref_name, dataset) pairs for each dataset.
list_log_sets
¤
List the log sets associated with this run. Returns (ref_name, logset) pairs for each log set.
list_videos
¤
List a sequence of (ref_name, Video) tuples associated with this Run.
remove_attachments
¤
remove_attachments(
attachments: Iterable[Attachment] | Iterable[str],
) -> None
Remove attachments from this run. Does not remove the attachments from Nominal.
attachments can be Attachment instances, or attachment RIDs.
remove_data_sources
¤
remove_data_sources(
*,
ref_names: Sequence[str] | None = None,
data_sources: (
Sequence[Connection | Dataset | Video | str] | None
) = None
) -> None
Remove data sources from this run.
The data_sources list can contain Connection, Dataset, or Video instances, or RIDs as strings.
update
¤
update(
*,
name: str | None = None,
start: datetime | IntegralNanosecondsUTC | None = None,
end: datetime | IntegralNanosecondsUTC | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None,
links: Sequence[str] | Sequence[Link] | None = None
) -> Self
Replace run metadata. Updates the current instance, and returns it. Only the metadata passed in will be replaced, the rest will remain untouched.
Links can be URLs or tuples of (URL, name).
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b"]
for old_label in run.labels:
    new_labels.append(old_label)
run = run.update(labels=new_labels)
Video
dataclass
¤
Video(
rid: str,
name: str,
description: str | None,
properties: Mapping[str, str],
labels: Sequence[str],
_clients: _Clients,
)
Bases: HasRid
add_file_to_video
¤
add_file_to_video(
path: Path | str,
start: datetime | IntegralNanosecondsUTC | None = None,
frame_timestamps: Sequence[IntegralNanosecondsUTC] | None = None,
description: str | None = None,
) -> VideoFile
Append to a video from a file-path to H264-encoded video data.
Parameters:
- path (Path | str): Path to the video file to add to an existing video within Nominal.
- start (datetime | IntegralNanosecondsUTC | None, default None): Starting timestamp of the video file in absolute UTC time.
- frame_timestamps (Sequence[IntegralNanosecondsUTC] | None, default None): Per-frame absolute nanosecond timestamps. Most use cases should instead use the 'start' parameter, unless precise per-frame metadata is available and desired.
- description (str | None, default None): Description of the video file. NOTE: this is currently not displayed to users and may be removed in the future.
Returns:
- VideoFile: Reference to the created video file.
add_mcap_to_video
¤
Append to a video from a file-path to an MCAP file containing video data.
Parameters:
- path (Path): Path to the video file to add to an existing video within Nominal.
- topic (str): Topic pointing to video data within the MCAP file.
- description (str | None, default None): Description of the video file. NOTE: this is currently not displayed to users and may be removed in the future.
Returns:
- VideoFile: Reference to the created video file.
add_mcap_to_video_from_io
¤
add_mcap_to_video_from_io(
mcap: BinaryIO,
name: str,
topic: str,
description: str | None = None,
file_type: tuple[str, str] | FileType = MCAP,
) -> VideoFile
Append to a video from a file-like binary stream with MCAP data containing video data.
Parameters:
- mcap (BinaryIO): File-like binary object containing MCAP data to upload.
- name (str): Name of the file to create in S3 during upload.
- topic (str): Topic pointing to video data within the MCAP file.
- description (str | None, default None): Description of the video file. NOTE: this is currently not displayed to users and may be removed in the future.
- file_type (tuple[str, str] | FileType, default MCAP): Metadata about the type of video (e.g. MCAP).
Returns:
- VideoFile: Reference to the created video file.
add_to_video_from_io
¤
add_to_video_from_io(
video: BinaryIO,
name: str,
start: datetime | IntegralNanosecondsUTC | None = None,
frame_timestamps: Sequence[IntegralNanosecondsUTC] | None = None,
description: str | None = None,
file_type: tuple[str, str] | FileType = MP4,
) -> VideoFile
Append to a video from a file-like object containing video data encoded in H264 or H265.
Parameters:
- video (BinaryIO): File-like object containing video data encoded in H264 or H265.
- name (str): Name of the file to use when uploading to S3.
- start (datetime | IntegralNanosecondsUTC | None, default None): Starting timestamp of the video file in absolute UTC time.
- frame_timestamps (Sequence[IntegralNanosecondsUTC] | None, default None): Per-frame absolute nanosecond timestamps. Most use cases should instead use the 'start' parameter, unless precise per-frame metadata is available and desired.
- description (str | None, default None): Description of the video file. NOTE: this is currently not displayed to users and may be removed in the future.
- file_type (tuple[str, str] | FileType, default MP4): Metadata about the type of video file, e.g., MP4 vs. MKV.
Returns:
- VideoFile: Reference to the created video file.
archive
¤
archive() -> None
Archive this video. Archived videos are not deleted, but are hidden from the UI.
poll_until_ingestion_completed
¤
Block until video ingestion has completed. This method polls Nominal for ingest status after uploading a video on an interval.
Raises:
NominalIngestFailed: if the ingest failed
NominalIngestError: if the ingest status is not known
unarchive
¤
unarchive() -> None
Unarchives this video, allowing it to show up in the 'All Videos' pane in the UI.
update
¤
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None
) -> Self
Replace video metadata. Updates the current instance, and returns it.
Only the metadata passed in will be replaced, the rest will remain untouched.
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b"]
for old_label in video.labels:
    new_labels.append(old_label)
video = video.update(labels=new_labels)
Workbook
dataclass
¤
WriteStream
dataclass
¤
WriteStream(
batch_size: int,
max_wait: timedelta,
_process_batch: Callable[[Sequence[BatchItem]], None],
_executor: ThreadPoolExecutor,
_thread_safe_batch: ThreadSafeBatch,
_stop: Event,
_pending_jobs: BoundedSemaphore,
)
close
¤
close(wait: bool = True) -> None
Close the Nominal stream: stop the process timeout thread and flush any remaining batches.
create
classmethod
¤
create(
batch_size: int,
max_wait: timedelta,
process_batch: Callable[[Sequence[BatchItem]], None],
) -> Self
Create the stream.
enqueue
¤
enqueue(
channel_name: str,
timestamp: str | datetime | IntegralNanosecondsUTC,
value: float | str,
tags: dict[str, str] | None = None,
) -> None
Add a message to the queue after normalizing the timestamp to IntegralNanosecondsUTC.
The message is added to the thread-safe batch and flushed if the batch size is reached.
enqueue_batch
¤
enqueue_batch(
channel_name: str,
timestamps: Sequence[str | datetime | IntegralNanosecondsUTC],
values: Sequence[float | str],
tags: dict[str, str] | None = None,
) -> None
Add a sequence of messages to the queue by calling enqueue for each message.
The messages are added one by one (with timestamp normalization) and flushed based on the batch conditions.
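For example (a sketch of constructing a WriteStream with a custom batch processor and enqueueing points; the processor here only prints batch sizes and is purely illustrative, and the import path is assumed):

from datetime import datetime, timedelta, timezone

from nominal.core import WriteStream  # import path assumed

def process_batch(items):
    # In practice this would forward the batch to Nominal; here we just report it.
    print(f"flushing {len(items)} points")

stream = WriteStream.create(batch_size=1000, max_wait=timedelta(seconds=5), process_batch=process_batch)
stream.enqueue("engine_rpm", datetime.now(timezone.utc), 1042.0, tags={"engine": "1"})
stream.enqueue_batch(
    "engine_rpm",
    timestamps=[datetime.now(timezone.utc)] * 2,
    values=[1043.0, 1044.0],
    tags={"engine": "1"},
)
stream.close()  # flush remaining items and stop the timeout thread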
flush
¤
Flush current batch of records to Nominal in a background thread.
wait: If true, wait for the batch to complete uploading before returning.
timeout: If wait is true, the time to wait for flush completion. NOTE: If None, waits indefinitely.
poll_until_ingestion_completed
¤
poll_until_ingestion_completed(
datasets: Iterable[Dataset],
interval: timedelta = timedelta(seconds=1),
) -> None
Block until all dataset ingestions have completed (succeeded or failed).
This method polls Nominal for ingest status on each of the datasets on an interval. No specific ordering is guaranteed, but all datasets will be checked at least once.
Raises:
NominalIngestMultiError: if any of the datasets failed to ingest