core
¤
Asset
dataclass
¤
Asset(
rid: str,
name: str,
description: str | None,
properties: Mapping[str, str],
labels: Sequence[str],
created_at: IntegralNanosecondsUTC,
_clients: _Clients,
)
Bases: HasRid
nominal_url
property
¤
nominal_url: str
Returns a link to the page for this Asset in the Nominal app
add_attachments
¤
add_attachments(
attachments: Iterable[Attachment] | Iterable[str],
) -> None
Add attachments that have already been uploaded to this asset.
attachments can be Attachment instances, or attachment RIDs.
add_connection
¤
add_connection(
data_scope_name: str,
connection: Connection | str,
*,
series_tags: Mapping[str, str] | None = None
) -> None
Add a connection to this asset.
Assets map a data scope name (the name within the asset) to a Connection (or connection rid). The same type of connection should use the same data scope name across assets, since checklists and templates use data scope names to reference connections.
Parameters:
- data_scope_name (str) – Logical name for the connection within the asset.
- connection (Connection | str) – Connection (or its rid) to add to the asset.
- series_tags (Mapping[str, str] | None, default: None) – Key-value tags to pre-filter the connection with before adding to the asset.
add_dataset
¤
add_dataset(
data_scope_name: str,
dataset: Dataset | str,
*,
series_tags: Mapping[str, str] | None = None
) -> None
Add a dataset to this asset.
Assets map a data scope name (the name within the asset) to a Dataset (or dataset rid). The same type of dataset should use the same data scope name across assets, since checklists and templates use data scope names to reference datasets.
Parameters:
- data_scope_name (str) – Logical name for the dataset within the asset.
- dataset (Dataset | str) – Dataset (or its rid) to add to the asset.
- series_tags (Mapping[str, str] | None, default: None) – Key-value tags to pre-filter the dataset with before adding to the asset.
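Example
A minimal sketch; the RID, asset name, scope name, and tag values are illustrative, and an existing client is assumed.
asset = client.create_asset("Vehicle SN-001")
dataset = client.get_dataset("ri.dataset.example")  # placeholder dataset rid
asset.add_dataset("telemetry", dataset, series_tags={"vehicle": "sn-001"})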
add_video
¤
Add a video to this asset.
Assets map a data scope name (the name within the asset for the data) to a Video (or a video rid). The same type of video (e.g., files from a given camera) should use the same data scope name across assets, since checklists and templates use data scope names to reference videos.
archive
¤
archive() -> None
Archive this asset. Archived assets are not deleted, but are hidden from the UI.
get_connection
¤
get_connection(data_scope_name: str) -> Connection
Retrieve a connection by data scope name, or raise ValueError if one is not found.
get_data_scope
¤
get_data_scope(data_scope_name: str) -> ScopeType
Retrieve a data scope by data scope name, or raise ValueError if one is not found.
get_dataset
¤
Retrieve a dataset by data scope name, or raise ValueError if one is not found.
get_or_create_dataset
¤
get_or_create_dataset(
data_scope_name: str,
*,
name: str | None = None,
description: str | None = None,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None
) -> Dataset
Retrieve a dataset by data scope name, or create a new one if it does not exist.
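Example
A minimal sketch; the scope and dataset names are illustrative.
dataset = asset.get_or_create_dataset("telemetry", name="SN-001 Telemetry")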
get_video
¤
Retrieve a video by data scope name, or raise ValueError if one is not found.
list_connections
¤
list_connections() -> Sequence[tuple[str, Connection]]
List the connections associated with this asset. Returns (data_scope_name, connection) pairs for each connection.
list_data_scopes
¤
list_datasets
¤
List the datasets associated with this asset. Returns (data_scope_name, dataset) pairs for each dataset.
list_videos
¤
List the videos associated with this asset. Returns (data_scope_name, video) pairs for each video.
remove_attachments
¤
remove_attachments(
attachments: Iterable[Attachment] | Iterable[str],
) -> None
Remove attachments from this asset. Does not remove the attachments from Nominal.
attachments can be Attachment instances, or attachment RIDs.
remove_data_scopes
¤
remove_data_scopes(
*,
names: Sequence[str] | None = None,
scopes: Sequence[ScopeType | str] | None = None
) -> None
Remove data scopes from this asset.
names are scope names; scopes are rids or scope objects.
update
¤
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None,
links: Sequence[str] | Sequence[Link] | None = None
) -> Self
Replace asset metadata. Updates the current instance, and returns it. Only the metadata passed in will be replaced; the rest will remain untouched.
Links can be URLs or tuples of (URL, name).
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b"]
for old_label in asset.labels:
    new_labels.append(old_label)
asset = asset.update(labels=new_labels)
Attachment
dataclass
¤
Attachment(
rid: str,
name: str,
description: str,
properties: Mapping[str, str],
labels: Sequence[str],
created_at: IntegralNanosecondsUTC,
_clients: _Clients,
)
Bases: HasRid
archive
¤
archive() -> None
Archive this attachment. Archived attachments are not deleted, but are hidden from the UI.
get_contents
¤
get_contents() -> BinaryIO
Retrieve the contents of this attachment. Returns a file-like object in binary mode for reading.
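Example
A minimal sketch that copies the contents to a local file; the output file name is illustrative.
with open("attachment_copy.bin", "wb") as f:
    f.write(attachment.get_contents().read())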
update
¤
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None
) -> Self
Replace attachment metadata. Updates the current instance, and returns it.
Only the metadata passed in will be replaced; the rest will remain untouched.
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b", *attachment.labels]
attachment = attachment.update(labels=new_labels)
Channel
dataclass
¤
Channel(
name: str,
data_source: str,
data_type: ChannelDataType | None,
unit: str | None,
description: str | None,
_clients: _Clients,
)
Metadata for working with channels.
search_logs
¤
search_logs(
*,
tags: Mapping[str, str] | None = None,
regex_match: str,
start: _InferrableTimestampType | None = None,
end: _InferrableTimestampType | None = None
) -> Iterable[LogPoint]
search_logs(
*,
regex_match: str | None = None,
insensitive_match: str | None = None,
tags: Mapping[str, str] | None = None,
start: _InferrableTimestampType | None = None,
end: _InferrableTimestampType | None = None
) -> Iterable[LogPoint]
Yields LogPoints from the current channel that match the provided arguments.
Parameters:
- regex_match (str | None, default: None) – If provided, a regex match to filter potential log messages by. NOTE: must not be present with insensitive_match.
- insensitive_match (str | None, default: None) – If provided, a case-insensitive string that yielded logs must match exactly. NOTE: must not be present with regex_match.
- tags (Mapping[str, str] | None, default: None) – Tags to filter logs from the channel with.
- start (_InferrableTimestampType | None, default: None) – Timestamp to start yielding results from. If not present, searches starting from the unix epoch.
- end (_InferrableTimestampType | None, default: None) – Timestamp after which to stop yielding results. If not present, searches until end of time.
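Example
A minimal sketch searching a channel's logs over the last hour; the regex, tag values, and channel variable are illustrative.
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
for log in channel.search_logs(
    regex_match=r"ERROR|CRITICAL",
    tags={"vehicle": "sn-001"},
    start=now - timedelta(hours=1),
    end=now,
):
    print(log.timestamp, log.message)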
update
¤
update(
*,
description: str | None = None,
unit: UnitLike | _NotProvided = _NotProvided()
) -> Self
Replace channel metadata within Nominal, updating and returning the local instance. Only the metadata passed in will be replaced; the rest will remain untouched.
Parameters:
- description (str | None, default: None) – Human-readable description of data within the channel.
- unit (UnitLike | _NotProvided, default: _NotProvided()) – Unit symbol to apply to the channel. If unit is a string or a Unit, this will update the unit symbol for the channel. If unit is None, this will clear the unit symbol for the channel. If not provided (or _NotProvided), this will leave the unit unaffected. NOTE: this is in contrast to other fields in other update() calls where None is treated as a "no-op".
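Example
A minimal sketch; the description and unit symbol are illustrative.
channel = channel.update(description="Battery pack voltage", unit="V")
channel = channel.update(unit=None)  # explicitly clears the unit symbol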
CheckViolation
dataclass
¤
CheckViolation(
rid: str,
check_rid: str,
name: str,
start: IntegralNanosecondsUTC,
end: IntegralNanosecondsUTC | None,
priority: Priority | None,
)
Checklist
dataclass
¤
Checklist(
rid: str,
name: str,
description: str,
properties: Mapping[str, str],
labels: Sequence[str],
_clients: _Clients,
)
Bases: HasRid
nominal_url
property
¤
nominal_url: str
Returns a link to the page for this checklist in the Nominal app
archive
¤
archive() -> None
Archive this checklist. Archived checklists are not deleted, but are hidden from the UI.
execute
¤
Execute a checklist against a run.
Parameters:
- run (Run | str) – Run (or its rid) to execute the checklist against.
- commit (str | None, default: None) – Commit hash of the version of the checklist to run, or None for the latest version.
Returns:
- DataReview – Created data review for the checklist execution.
execute_streaming
¤
execute_streaming(
assets: Sequence[Asset | str],
integration_rids: Sequence[str],
*,
evaluation_delay: timedelta = timedelta(),
recovery_delay: timedelta = timedelta(seconds=15)
) -> None
Execute the checklist for the given assets.
- assets: Can be Asset instances, or Asset RIDs.
- integration_rids: Checklist violations will be sent to the specified integrations. At least one integration must be specified. See https://app.gov.nominal.io/settings/integrations for a list of available integrations.
- evaluation_delay: Delays the evaluation of the streaming checklist. This is useful for when data is delayed.
- recovery_delay: Specifies the minimum amount of time that must pass before a check can recover from a failure. Minimum value is 15 seconds.
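Example
A minimal sketch; the integration rid is a placeholder, and an existing asset is assumed.
from datetime import timedelta

checklist.execute_streaming(
    assets=[asset],
    integration_rids=["ri.integration.example"],
    evaluation_delay=timedelta(seconds=5),
    recovery_delay=timedelta(seconds=30),
)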
preview_for_run_url
¤
Returns a link to the page for previewing this checklist on a given run in the Nominal app
stop_streaming_for_assets
¤
Stop the checklist for the given assets.
Connection
dataclass
¤
Bases: DataSource
archive
¤
archive() -> None
Archive this connection. Archived connections are not deleted, but are hidden from the UI.
get_channels
¤
Look up the metadata for all matching channels associated with this datasource
names: List of channel names to look up metadata for.
Yields a sequence of channel metadata objects which match the provided query parameters
get_write_stream
¤
get_write_stream(
batch_size: int = 50000,
max_wait: timedelta = timedelta(seconds=1),
data_format: Literal["json", "protobuf", "experimental"] = "json",
) -> DataStream
Stream to write timeseries data to a datasource.
Data is written asynchronously.
batch_size: How big the batch can get before writing to Nominal.
max_wait: How long a batch can exist before being flushed to Nominal.
data_format: Serialized data format to use during upload.
NOTE: selecting 'protobuf' or 'experimental' requires that `nominal` was installed
with `protos` extras.
Write stream object configured to send data to nominal. This may be used as a context manager
(so that resources are automatically released upon exiting the context), or if not used as a context
manager, should be explicitly `close()`-ed once no longer needed.
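Example
A minimal sketch of the context-manager usage described above; the enqueue call is an assumption about the returned stream's interface, so consult the DataStream API for the exact write methods.
from datetime import datetime, timezone

with connection.get_write_stream(batch_size=10_000) as stream:
    # hypothetical write call: channel name, timestamp, value, tags
    stream.enqueue("vehicle.speed", datetime.now(timezone.utc), 41.8, {"vehicle": "sn-001"})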
search_channels
¤
search_channels(
exact_match: Sequence[str] = (),
fuzzy_search_text: str = "",
*,
data_types: Sequence[ChannelDataType] | None = None
) -> Iterable[Channel]
Look up channels associated with a datasource.
Parameters:
- exact_match (Sequence[str], default: ()) – Filter the returned channels to those whose names match all provided strings (case insensitive).
- fuzzy_search_text (str, default: '') – Filters the returned channels to those whose names fuzzily match the provided string.
- data_types (Sequence[ChannelDataType] | None, default: None) – Filter the returned channels to those that match any of the provided types.
Yields:
- Channel – Channel metadata objects matching the provided query parameters.
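Example
A minimal sketch; the search terms are illustrative.
for channel in connection.search_channels(exact_match=["engine", "temp"]):
    print(channel.name, channel.unit)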
set_channel_prefix_tree
¤
set_channel_prefix_tree(delimiter: str = '.') -> None
Index channels hierarchically by a given delimiter.
Primarily, the result of this operation is to prompt the frontend to represent channels in a tree-like manner that allows folding channels by common roots.
set_channel_units
¤
set_channel_units(
channels_to_units: UnitMapping,
validate_schema: bool = False,
allow_display_only_units: bool = False,
) -> None
Set units for channels based on a provided mapping of channel names to units.
channels_to_units: A mapping of channel names to unit symbols.
NOTE: any existing units may be cleared from a channel by providing None as a symbol.
validate_schema: If true, raises a ValueError if non-existent channel names are provided in
`channels_to_units`. Default is False.
allow_display_only_units: If true, allow units that would be treated as display-only by Nominal.
Raises:
- ValueError – Unsupported unit symbol provided.
- conjure_python_client.ConjureHTTPError – Error completing requests.
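Example
A minimal sketch; the channel names and unit symbols are illustrative.
connection.set_channel_units({
    "vehicle.speed": "m/s",
    "battery.voltage": "V",
    "debug.counter": None,  # clears any existing unit
})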
ContainerizedExtractor
dataclass
¤
ContainerizedExtractor(
rid: str,
name: str,
description: str | None,
image: DockerImageSource,
inputs: Sequence[FileExtractionInput],
properties: Mapping[str, str],
labels: Sequence[str],
timestamp_metadata: TimestampMetadata,
_clients: _Clients,
)
Bases: HasRid
Containerized extractor which can be used to parse custom data formats into Nominal using docker images.
update
¤
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None,
timestamp_metadata: TimestampMetadata | None = None,
tags: Sequence[str] | None = None,
default_tag: str | None = None
) -> Self
Returns: Updated version of this instance containing newly changed fields and their values.
DataReview
dataclass
¤
DataReview(
rid: str,
run_rid: str,
checklist_rid: str,
checklist_commit: str,
completed: bool,
created_at: IntegralNanosecondsUTC,
_clients: _Clients,
)
Bases: HasRid
nominal_url
property
¤
nominal_url: str
Returns a link to the page for this Data Review in the Nominal app
archive
¤
archive() -> None
Archive this data review. Archived data reviews are not deleted, but are hidden from the UI.
NOTE: it is not currently possible to unarchive a data review once archived.
get_violations
¤
get_violations() -> Sequence[CheckViolation]
Retrieves the list of check violations for the data review.
poll_for_completion
¤
poll_for_completion(
interval: timedelta = timedelta(seconds=2),
) -> DataReview
Polls the data review until it is completed.
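Example
A minimal sketch of a typical review workflow, assuming a checklist and run are already in hand.
review = checklist.execute(run)
review = review.poll_for_completion()
for violation in review.get_violations():
    print(violation.name, violation.start, violation.priority)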
DataReviewBuilder
dataclass
¤
DataReviewBuilder(
_integration_rids: list[str],
_requests: list[CreateDataReviewRequest],
_tags: list[str],
_clients: _Clients,
)
initiate
¤
initiate(wait_for_completion: bool = True) -> Sequence[DataReview]
DataSource
dataclass
¤
DataSource(rid: str, _clients: _Clients)
Bases: HasRid
get_channels
¤
Look up the metadata for all matching channels associated with this datasource
names: List of channel names to look up metadata for.
Yields a sequence of channel metadata objects which match the provided query parameters
get_write_stream
¤
get_write_stream(
batch_size: int = 50000,
max_wait: timedelta = timedelta(seconds=1),
data_format: Literal["json", "protobuf", "experimental"] = "json",
) -> DataStream
Stream to write timeseries data to a datasource.
Data is written asynchronously.
batch_size: How big the batch can get before writing to Nominal.
max_wait: How long a batch can exist before being flushed to Nominal.
data_format: Serialized data format to use during upload.
NOTE: selecting 'protobuf' or 'experimental' requires that `nominal` was installed
with `protos` extras.
Write stream object configured to send data to nominal. This may be used as a context manager
(so that resources are automatically released upon exiting the context), or if not used as a context
manager, should be explicitly `close()`-ed once no longer needed.
search_channels
¤
search_channels(
exact_match: Sequence[str] = (),
fuzzy_search_text: str = "",
*,
data_types: Sequence[ChannelDataType] | None = None
) -> Iterable[Channel]
Look up channels associated with a datasource.
Parameters:
- exact_match (Sequence[str], default: ()) – Filter the returned channels to those whose names match all provided strings (case insensitive).
- fuzzy_search_text (str, default: '') – Filters the returned channels to those whose names fuzzily match the provided string.
- data_types (Sequence[ChannelDataType] | None, default: None) – Filter the returned channels to those that match any of the provided types.
Yields:
- Channel – Channel metadata objects matching the provided query parameters.
set_channel_prefix_tree
¤
set_channel_prefix_tree(delimiter: str = '.') -> None
Index channels hierarchically by a given delimiter.
Primarily, the result of this operation is to prompt the frontend to represent channels in a tree-like manner that allows folding channels by common roots.
set_channel_units
¤
set_channel_units(
channels_to_units: UnitMapping,
validate_schema: bool = False,
allow_display_only_units: bool = False,
) -> None
Set units for channels based on a provided mapping of channel names to units.
channels_to_units: A mapping of channel names to unit symbols.
NOTE: any existing units may be cleared from a channel by providing None as a symbol.
validate_schema: If true, raises a ValueError if non-existent channel names are provided in
`channels_to_units`. Default is False.
allow_display_only_units: If true, allow units that would be treated as display-only by Nominal.
Raises:
- ValueError – Unsupported unit symbol provided.
- conjure_python_client.ConjureHTTPError – Error completing requests.
Dataset
dataclass
¤
Dataset(
rid: str,
_clients: _Clients,
name: str,
description: str | None,
properties: Mapping[str, str],
labels: Sequence[str],
bounds: DatasetBounds | None,
)
Bases: DataSource
add_ardupilot_dataflash_to_dataset
class-attribute
instance-attribute
¤
add_ardupilot_dataflash_to_dataset = add_ardupilot_dataflash
add_journal_json_to_dataset
class-attribute
instance-attribute
¤
add_journal_json_to_dataset = add_journal_json
add_mcap_to_dataset_from_io
class-attribute
instance-attribute
¤
add_mcap_to_dataset_from_io = add_mcap_from_io
add_tabular_data_to_dataset
class-attribute
instance-attribute
¤
add_tabular_data_to_dataset = add_tabular_data
nominal_url
property
¤
nominal_url: str
Returns a URL to the page in the nominal app containing this dataset
add_ardupilot_dataflash
¤
add_ardupilot_dataflash(path: Path | str) -> DatasetFile
Add a Dataflash file to an existing dataset.
add_containerized
¤
add_containerized(
extractor: str | ContainerizedExtractor,
sources: Mapping[str, Path | str],
tag: str | None = None,
) -> DatasetFile
Add data from proprietary data formats using a pre-registered custom extractor.
Parameters:
- extractor (str | ContainerizedExtractor) – ContainerizedExtractor instance (or rid of one) to use for extracting and ingesting data.
- sources (Mapping[str, Path | str]) – Mapping of environment variables to source files to use with the extractor. NOTE: these must match the registered inputs of the containerized extractor exactly.
- tag (str | None, default: None) – Tag of the Docker container which hosts the extractor. NOTE: if not provided, the default registered docker tag will be used.
add_from_io
¤
add_from_io(
dataset: BinaryIO,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
file_type: tuple[str, str] | FileType = CSV,
file_name: str | None = None,
tag_columns: Mapping[str, str] | None = None,
tags: Mapping[str, str] | None = None,
) -> DatasetFile
Append to a dataset from a file-like object.
Parameters:
- dataset (BinaryIO) – A file-like object containing the data to append to the dataset.
- timestamp_column (str) – The column in the dataset that contains the timestamp data.
- timestamp_type (_AnyTimestampType) – The type of timestamp data in the dataset.
- file_type (tuple[str, str] | FileType, default: CSV) – An (extension, mimetype) pair describing the type of file.
- file_name (str | None, default: None) – The name of the file to upload.
- tag_columns (Mapping[str, str] | None, default: None) – A dictionary mapping tag keys to column names.
- tags (Mapping[str, str] | None, default: None) – Key-value pairs to apply as tags to all data uniformly in the file.
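Example
A minimal sketch appending an in-memory CSV; the column names and values are illustrative, and 'epoch_seconds' is the timestamp type string mentioned elsewhere in these docs.
import io

csv_bytes = io.BytesIO(b"timestamp,speed\n1700000000,12.3\n1700000001,12.8\n")
dataset.add_from_io(
    csv_bytes,
    timestamp_column="timestamp",
    timestamp_type="epoch_seconds",
    file_name="speed.csv",
)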
add_journal_json
¤
add_journal_json(path: Path | str) -> DatasetFile
Add a journald jsonl file to an existing dataset.
add_mcap
¤
add_mcap(
path: Path | str,
include_topics: Iterable[str] | None = None,
exclude_topics: Iterable[str] | None = None,
) -> DatasetFile
Add an MCAP file to an existing dataset.
path: Path to the MCAP file to add to this dataset
include_topics: If present, list of topics to restrict ingestion to.
If not present, defaults to all protobuf-encoded topics present in the MCAP.
exclude_topics: If present, list of topics to not ingest from the MCAP.
add_mcap_from_io
¤
add_mcap_from_io(
mcap: BinaryIO,
include_topics: Iterable[str] | None = None,
exclude_topics: Iterable[str] | None = None,
file_name: str | None = None,
) -> DatasetFile
Add data to this dataset from an MCAP file-like object.
The mcap must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO. If the file is not in binary-mode, the requests library blocks indefinitely.
mcap: Binary file-like MCAP stream
include_topics: If present, list of topics to restrict ingestion to.
If not present, defaults to all protobuf-encoded topics present in the MCAP.
exclude_topics: If present, list of topics to not ingest from the MCAP.
file_name: If present, name to use when uploading file. Otherwise, defaults to dataset name.
add_tabular_data
¤
add_tabular_data(
path: Path | str,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
tag_columns: Mapping[str, str] | None = None,
tags: Mapping[str, str] | None = None,
) -> DatasetFile
Append to a dataset from tabular data on-disk.
Currently, the supported file types are:
- .csv / .csv.gz
- .parquet / .parquet.gz
- .parquet.tar / .parquet.tar.gz / .parquet.zip
Parameters:
- path (Path | str) – Path to the file on disk to add to the dataset.
- timestamp_column (str) – Column within the file containing timestamp information. NOTE: this is omitted as a channel from the data added to Nominal, and is instead used to set the timestamps for all other uploaded data channels.
- timestamp_type (_AnyTimestampType) – Type of timestamp data contained within the timestamp_column, e.g. 'epoch_seconds'.
- tag_columns (Mapping[str, str] | None, default: None) – A dictionary mapping tag keys to column names.
- tags (Mapping[str, str] | None, default: None) – Key-value pairs to apply as tags to all data uniformly in the file.
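Example
A minimal sketch; the path, column name, and tags are illustrative.
dataset.add_tabular_data(
    "flight_test.csv",
    timestamp_column="time",
    timestamp_type="epoch_seconds",
    tags={"vehicle": "sn-001"},
)
dataset.poll_until_ingestion_completed()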
archive
¤
archive() -> None
Archive this dataset. Archived datasets are not deleted, but are hidden from the UI.
get_channels
¤
Look up the metadata for all matching channels associated with this datasource
names: List of channel names to look up metadata for.
Yields a sequence of channel metadata objects which match the provided query parameters
get_dataset_file
¤
get_dataset_file(dataset_file_id: str) -> DatasetFile
Retrieve the given dataset file by ID
Parameters:
- dataset_file_id (str) – ID of the dataset file to retrieve.
Returns:
- DatasetFile – Metadata for the requested dataset file.
Raises:
- FileNotFoundError – Details about the requested file could not be found.
get_log_stream
¤
get_log_stream(
batch_size: int = 50000, max_wait: timedelta = timedelta(seconds=1)
) -> LogStream
Stream to asynchronously write log data to a dataset.
Parameters:
- batch_size (int, default: 50000) – Number of records to upload at a time to Nominal. NOTE: Raising this may improve performance in high latency scenarios.
- max_wait (timedelta, default: timedelta(seconds=1)) – Maximum number of seconds to allow data to be locally buffered before streaming to Nominal.
Returns:
- LogStream – Write stream object configured to send logs to Nominal. This may be used as a context manager (so that resources are automatically released upon exiting the context), or if not used as a context manager, should be explicitly close()-ed once no longer needed.
get_write_stream
¤
get_write_stream(
batch_size: int = 50000,
max_wait: timedelta = timedelta(seconds=1),
data_format: Literal["json", "protobuf", "experimental"] = "json",
) -> DataStream
Stream to write timeseries data to a datasource.
Data is written asynchronously.
batch_size: How big the batch can get before writing to Nominal.
max_wait: How long a batch can exist before being flushed to Nominal.
data_format: Serialized data format to use during upload.
NOTE: selecting 'protobuf' or 'experimental' requires that `nominal` was installed
with `protos` extras.
Write stream object configured to send data to nominal. This may be used as a context manager
(so that resources are automatically released upon exiting the context), or if not used as a context
manager, should be explicitly `close()`-ed once no longer needed.
list_files
¤
list_files(*, successful_only: bool = True) -> Iterable[DatasetFile]
List files ingested to this dataset.
If successful_only, yields files with a 'success' ingest status only.
poll_until_ingestion_completed
¤
Block until dataset file ingestion has completed. This method polls Nominal for ingest status after uploading a file to a dataset on an interval.
Raises:
- NominalIngestFailed – if the ingest failed.
- NominalIngestError – if the ingest status is not known.
search_channels
¤
search_channels(
exact_match: Sequence[str] = (),
fuzzy_search_text: str = "",
*,
data_types: Sequence[ChannelDataType] | None = None
) -> Iterable[Channel]
Look up channels associated with a datasource.
Parameters:
- exact_match (Sequence[str], default: ()) – Filter the returned channels to those whose names match all provided strings (case insensitive).
- fuzzy_search_text (str, default: '') – Filters the returned channels to those whose names fuzzily match the provided string.
- data_types (Sequence[ChannelDataType] | None, default: None) – Filter the returned channels to those that match any of the provided types.
Yields:
- Channel – Channel metadata objects matching the provided query parameters.
set_channel_prefix_tree
¤
set_channel_prefix_tree(delimiter: str = '.') -> None
Index channels hierarchically by a given delimiter.
Primarily, the result of this operation is to prompt the frontend to represent channels in a tree-like manner that allows folding channels by common roots.
set_channel_units
¤
set_channel_units(
channels_to_units: UnitMapping,
validate_schema: bool = False,
allow_display_only_units: bool = False,
) -> None
Set units for channels based on a provided mapping of channel names to units.
channels_to_units: A mapping of channel names to unit symbols.
NOTE: any existing units may be cleared from a channel by providing None as a symbol.
validate_schema: If true, raises a ValueError if non-existent channel names are provided in
`channels_to_units`. Default is False.
allow_display_only_units: If true, allow units that would be treated as display-only by Nominal.
Raises:
- ValueError – Unsupported unit symbol provided.
- conjure_python_client.ConjureHTTPError – Error completing requests.
unarchive
¤
unarchive() -> None
Unarchives this dataset, allowing it to show up in the 'All Datasets' pane in the UI.
update
¤
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None
) -> Self
Replace dataset metadata. Updates the current instance, and returns it.
Only the metadata passed in will be replaced; the rest will remain untouched.
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b"]
for old_label in dataset.labels:
    new_labels.append(old_label)
dataset = dataset.update(labels=new_labels)
write_logs
¤
write_logs(
logs: Iterable[LogPoint],
channel_name: str = "logs",
batch_size: int = 1000,
) -> None
Stream logs to the datasource.
This method executes synchronously, i.e. it blocks until all logs are sent to the API. Logs are sent in batches. The logs can be any iterable of LogPoints, including a generator.
Parameters:
- logs (Iterable[LogPoint]) – LogPoints to stream to Nominal.
- channel_name (str, default: 'logs') – Name of the channel to stream logs to.
- batch_size (int, default: 1000) – Number of logs to send to the API at a time.
Example
from typing import Iterable

from nominal.core import LogPoint

def parse_logs_from_file(file_path: str) -> Iterable[LogPoint]:
    # 2025-04-08T14:26:28.679052Z [INFO] Sent ACTUATE_MOTOR command
    with open(file_path, "r") as f:
        for line in f:
            timestamp, message = line.removesuffix("\n").split(maxsplit=1)
            yield LogPoint.create(timestamp, message)

dataset = client.get_dataset("dataset_rid")
logs = parse_logs_from_file("logs.txt")
dataset.write_logs(logs)
DatasetFile
dataclass
¤
DatasetFile(
id: str,
dataset_rid: str,
name: str,
bounds: Bounds | None,
uploaded_at: IntegralNanosecondsUTC,
ingested_at: IntegralNanosecondsUTC | None,
deleted_at: IntegralNanosecondsUTC | None,
ingest_status: IngestStatus,
timestamp_channel: str | None,
timestamp_type: TypedTimestampType | None,
file_tags: Mapping[str, str] | None,
tag_columns: Mapping[str, str] | None,
_clients: _Clients,
)
delete
¤
delete() -> None
Deletes the dataset file, removing its data permanently from Nominal.
NOTE: this cannot be undone outside of fully re-ingesting the file into Nominal.
download
¤
download(output_directory: Path, force: bool = False) -> Path
Download the dataset file to a destination on local disk.
Parameters:
- output_directory (Path) – Download file to the given directory.
- force (bool, default: False) – If true, delete any files that exist / create parent directories if nonexistent.
Returns:
- Path – Path that the file was downloaded to.
Raises:
- FileNotFoundError – Output directory doesn't exist and force=False.
- FileExistsError – File already exists at destination.
- NotADirectoryError – Output directory exists and is not a directory.
- RuntimeError – Error downloading file.
download_original_files
¤
download_original_files(
output_directory: Path,
force: bool = True,
parallel_downloads: int = 8,
num_retries: int = 3,
) -> Sequence[Path]
Download the input file(s) for a containerized extractor to a destination on local disk.
Parameters:
- output_directory (Path) – Download file(s) to the given directory.
- force (bool, default: True) – If true, delete any files that exist / create parent directories if nonexistent.
- parallel_downloads (int, default: 8) – Number of files to download concurrently.
- num_retries (int, default: 3) – Number of retries to perform per file download if any exception occurs.
Returns:
- Sequence[Path] – Paths of the downloaded files.
Raises:
- NotADirectoryError – Output directory is not a directory.
NOTE: any file that fails to download will result in an error log and will not be returned.
poll_until_ingestion_completed
¤
Block until dataset file ingestion has completed
This method polls Nominal for ingest status after uploading a file to a dataset on an interval.
DockerImageSource
dataclass
¤
DockerImageSource(
registry: str,
repository: str,
tag_details: TagDetails,
authentication: UserPassAuth | None,
command: str | None,
)
Event
dataclass
¤
Event(
rid: str,
asset_rids: Sequence[str],
name: str,
description: str,
start: IntegralNanosecondsUTC,
duration: IntegralNanosecondsDuration,
properties: Mapping[str, str],
labels: Sequence[str],
type: EventType,
_uuid: str,
created_by_rid: str | None,
_clients: _Clients,
)
update
¤
update(
*,
name: str | None = None,
description: str | None = None,
assets: Iterable[Asset | str] | None = None,
start: datetime | IntegralNanosecondsUTC | None = None,
duration: timedelta | IntegralNanosecondsDuration | None = None,
properties: Mapping[str, str] | None = None,
labels: Iterable[str] | None = None,
type: EventType | None
) -> Self
Replace event metadata. Updates the current instance, and returns it. Only the metadata passed in will be replaced; the rest will remain untouched.
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b"]
for old_label in event.labels:
    new_labels.append(old_label)
event = event.update(labels=new_labels)
FileExtractionInput
dataclass
¤
FileExtractionInput(
name: str,
description: str | None,
environment_variable: str,
file_suffixes: Sequence[str],
required: bool,
)
Configuration for a file extraction input in a containerized extractor.
Parameters:
- name (str) – Human-readable name for this input configuration.
- description (str | None) – Optional detailed description of what this input represents.
- environment_variable (str) – Environment variable name that will be set in the container to specify the input file path.
- file_suffixes (Sequence[str]) – List of file extensions that this input accepts (e.g., ['.csv', '.txt']).
- required (bool) – Whether this input is mandatory for the extractor to run. Defaults to False.
FileType
¤
Bases: NamedTuple
from_path
classmethod
¤
FileTypes
¤
BINARY
class-attribute
instance-attribute
¤
DATAFLASH
class-attribute
instance-attribute
¤
JOURNAL_JSONL
class-attribute
instance-attribute
¤
JOURNAL_JSONL_GZ
class-attribute
instance-attribute
¤
MCAP
class-attribute
instance-attribute
¤
PARQUET
class-attribute
instance-attribute
¤
PARQUET_GZ
class-attribute
instance-attribute
¤
PARQUET_TAR
class-attribute
instance-attribute
¤
PARQUET_TAR_GZ
class-attribute
instance-attribute
¤
PARQUET_ZIP
class-attribute
instance-attribute
¤
LogPoint
dataclass
¤
LogPoint(
timestamp: IntegralNanosecondsUTC,
message: str,
args: Mapping[str, str],
)
LogPoint is a single, timestamped log entry.
LogPoints are added to a Dataset using Dataset.write_logs.
NominalClient
dataclass
¤
NominalClient(_clients: ClientsBunch, _profile: str | None = None)
create
classmethod
¤
create(
base_url: str,
token: str | None,
trust_store_path: str | None = None,
connect_timeout: timedelta | float = DEFAULT_CONNECT_TIMEOUT,
*,
workspace_rid: str | None = None
) -> Self
Create a connection to the Nominal platform.
base_url: The URL of the Nominal API platform, e.g. "https://api.gov.nominal.io/api".
token: An API token to authenticate with. If None, the token will be looked up in ~/.nominal.yml.
trust_store_path: Path to a trust store CA root file to initiate SSL connections. If not provided, certifi's trust store is used.
connect_timeout: Timeout for any single request to the Nominal API.
workspace_rid: The workspace RID to use for all API calls that require it. If not provided, the default workspace will be used (if one is configured for the tenant).
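Example
A minimal sketch of creating a client; the URL and token are placeholders, and the import path assumes NominalClient is exported from nominal.core.
from nominal.core import NominalClient

client = NominalClient.create(
    base_url="https://api.gov.nominal.io/api",
    token="your-api-token",
)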
create_asset
¤
create_asset(
name: str,
description: str | None = None,
*,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] = ()
) -> Asset
Create an asset.
create_attachment
¤
create_attachment(
attachment_file: Path | str,
*,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] = ()
) -> Attachment
create_attachment_from_io
¤
create_attachment_from_io(
attachment: BinaryIO,
name: str,
file_type: tuple[str, str] | FileType = BINARY,
description: str | None = None,
*,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] = ()
) -> Attachment
Upload an attachment. The attachment must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO. If the file is not in binary-mode, the requests library blocks indefinitely.
create_containerized_extractor
¤
create_containerized_extractor(
name: str,
*,
docker_image: DockerImageSource,
timestamp_column: str,
timestamp_type: _AnyTimestampType,
inputs: Sequence[FileExtractionInput] = (),
file_output_format: FileOutputFormat | None = None,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
description: str | None = None
) -> ContainerizedExtractor
create_dataset
¤
create_dataset(
name: str,
*,
description: str | None = None,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
prefix_tree_delimiter: str | None = None
) -> Dataset
Create an empty dataset.
Parameters:
- name (str) – Name of the dataset to create in Nominal.
- description (str | None, default: None) – Human readable description of the dataset.
- labels (Sequence[str], default: ()) – Text labels to apply to the created dataset.
- properties (Mapping[str, str] | None, default: None) – Key-value properties to apply to the created dataset.
- prefix_tree_delimiter (str | None, default: None) – If present, the delimiter to represent tiers when viewing channels hierarchically.
Returns:
- Dataset – Reference to the created dataset in Nominal.
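Example
A minimal sketch; the name, description, and labels are illustrative.
dataset = client.create_dataset(
    "Flight Test 42",
    description="Telemetry from flight test 42",
    labels=["flight-test"],
    prefix_tree_delimiter=".",
)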
create_empty_video
¤
create_empty_video(
name: str,
*,
description: str | None = None,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None
) -> Video
Create an empty video to append video files to.
Parameters:
- name (str) – Name of the video to create in Nominal.
- description (str | None, default: None) – Description of the video to create in Nominal.
- labels (Sequence[str], default: ()) – Labels to apply to the video in Nominal.
- properties (Mapping[str, str] | None, default: None) – Properties to apply to the video in Nominal.
Returns:
- Video – Handle to the created video.
create_event
¤
create_event(
name: str,
type: EventType,
start: datetime | IntegralNanosecondsUTC,
duration: timedelta | IntegralNanosecondsDuration = timedelta(),
*,
description: str | None = None,
assets: Iterable[Asset | str] = (),
properties: Mapping[str, str] | None = None,
labels: Iterable[str] = ()
) -> Event
create_run
¤
create_run(
name: str,
start: datetime | IntegralNanosecondsUTC,
end: datetime | IntegralNanosecondsUTC | None,
description: str | None = None,
*,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] = (),
attachments: Iterable[Attachment] | Iterable[str] = (),
asset: Asset | str | None = None
) -> Run
Create a run.
create_secret
¤
create_secret(
name: str,
decrypted_value: str,
description: str | None = None,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
) -> Secret
Create a secret for the current user
Parameters:
- name (str) – Name of the secret.
- decrypted_value (str) – Plain text value of the secret.
- description (str | None, default: None) – Description of the secret.
- labels (Sequence[str], default: ()) – Labels for the secret.
- properties (Mapping[str, str] | None, default: None) – Properties for the secret.
create_streaming_connection
¤
create_streaming_connection(
datasource_id: str,
connection_name: str,
datasource_description: str | None = None,
*,
required_tag_names: list[str] | None = None
) -> StreamingConnection
create_video_from_mcap
¤
create_video_from_mcap(
path: Path | str,
topic: str,
name: str | None = None,
description: str | None = None,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None
) -> Video
Create a video from an MCAP file containing H264 or H265 video data.
If name is None, the name of the file will be used.
See create_video_from_mcap_io for more details.
create_video_from_mcap_io
¤
create_video_from_mcap_io(
mcap: BinaryIO,
topic: str,
name: str,
description: str | None = None,
file_type: tuple[str, str] | FileType = MCAP,
*,
labels: Sequence[str] = (),
properties: Mapping[str, str] | None = None,
file_name: str | None = None
) -> Video
Create video from topic in a mcap file.
Mcap must be a file-like object in binary mode, e.g. open(path, "rb") or io.BytesIO.
If name is None, the name of the file will be used.
create_workbook_from_template
¤
create_workbook_from_template(
template_rid: str,
run_rid: str,
title: str | None = None,
description: str | None = None,
is_draft: bool = False,
) -> Workbook
Creates a workbook from a workbook template.
NOTE: is_draft is intentionally unused and will be removed in a future release.
from_profile
classmethod
¤
from_profile(
profile: str,
*,
trust_store_path: str | None = None,
connect_timeout: timedelta | float = DEFAULT_CONNECT_TIMEOUT
) -> Self
Create a connection to the Nominal platform from a named profile in the Nominal config.
Parameters:
- profile (str) – Profile name in the Nominal config.
- trust_store_path (str | None, default: None) – Path to a trust store certificate chain to initiate SSL connections. If not provided, certifi's trust store is used.
- connect_timeout (timedelta | float, default: DEFAULT_CONNECT_TIMEOUT) – Request connection timeout.
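Example
A minimal sketch; "staging" is a placeholder profile name.
client = NominalClient.from_profile("staging")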
from_token
classmethod
¤
from_token(
token: str,
base_url: str = DEFAULT_API_BASE_URL,
*,
workspace_rid: str | None = None,
trust_store_path: str | None = None,
connect_timeout: timedelta | float = DEFAULT_CONNECT_TIMEOUT,
_profile: str | None = None
) -> Self
Create a connection to the Nominal platform from a token.
Parameters:
- token (str) – Authentication token to use for the connection.
- base_url (str, default: DEFAULT_API_BASE_URL) – The URL of the Nominal API platform.
- workspace_rid (str | None, default: None) – The workspace RID to use for all API calls that require it. If not provided, the default workspace will be used (if one is configured for the tenant).
- trust_store_path (str | None, default: None) – Path to a trust store certificate chain to initiate SSL connections. If not provided, certifi's trust store is used.
- connect_timeout (timedelta | float, default: DEFAULT_CONNECT_TIMEOUT) – Request connection timeout.
get_all_units
¤
Retrieve list of metadata for all supported units within Nominal
get_attachments
¤
get_attachments(rids: Iterable[str]) -> Sequence[Attachment]
Retrieve attachments by their RIDs.
get_commensurable_units
¤
Get the list of units that are commensurable (convertible to/from) the given unit symbol.
get_datasets
¤
Retrieve datasets by their RIDs.
get_unit
¤
get_unit(unit_symbol: str) -> Unit | None
Get details of the given unit symbol, or none if the symbol is not recognized by Nominal.
Parameters:
- unit_symbol (str) – Symbol of the unit to get metadata for. NOTE: This currently requires that units are formatted as laid out in the latest UCUM standards (see https://ucum.org/ucum).
Returns:
- Unit | None – Resolved unit metadata if the symbol is valid and supported by Nominal, or None if no such unit symbol matches.
get_user
¤
get_workbook_template
¤
get_workbook_template(rid: str) -> WorkbookTemplate
Gets the given workbook template by rid.
get_workspace
¤
get_workspace(workspace_rid: str | None = None) -> Workspace
Get workspace via given RID, or the default workspace if no RID is provided.
Parameters:
- workspace_rid (str | None, default: None) – If provided, the RID of the workspace to retrieve. If None, retrieves the default workspace (deferring first to any workspace rid stored in the Nominal config, and attempting to fall back to the tenant-wide default workspace).
Returns:
- Workspace – Details about the requested workspace.
Raises:
- NominalConfigError – Raised if no workspace is provided and there is no configured default workspace for the user.
- ConjureHTTPError – Requested workspace is unavailable to the user.
list_streaming_checklists
¤
list_workspaces
¤
Return all workspaces visible to the current user
search_assets
¤
search_assets(
search_text: str | None = None,
*,
labels: Sequence[str] | None = None,
properties: Mapping[str, str] | None = None,
exact_substring: str | None = None,
workspace: WorkspaceSearchT | None = ALL
) -> Sequence[Asset]
Search for assets meeting the specified filters.
Filters are ANDed together, e.g. (asset.label == label) AND (asset.search_text =~ field)
Parameters:
- search_text (str | None, default: None) – Case-insensitive search for any of the keywords in all string fields.
- labels (Sequence[str] | None, default: None) – A sequence of labels that must ALL be present on an asset to be included.
- properties (Mapping[str, str] | None, default: None) – A mapping of key-value pairs that must ALL be present on an asset to be included.
- exact_substring (str | None, default: None) – Case-insensitive search for an exact string match in all string fields.
- workspace (WorkspaceSearchT | None, default: ALL) – Filters search to the given workspace. If WorkspaceSearchType.ALL is given for workspace (default), searches within all workspaces the user can access. If WorkspaceSearchType.DEFAULT, searches within the default workspace if configured, or raises a NominalConfigError if one is not configured. If a Workspace or a workspace rid is given, searches will be constrained to that workspace if the user has access to the workspace.
Returns:
- Sequence[Asset] – Assets matching all of the provided filters.
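Example
A minimal sketch; the label and substring are illustrative.
for asset in client.search_assets(exact_substring="sn-001", labels=["flight-test"]):
    print(asset.name, asset.rid)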
search_checklists
¤
search_checklists(
search_text: str | None = None,
labels: Sequence[str] | None = None,
properties: Mapping[str, str] | None = None,
author: User | str | None = None,
assignee: User | str | None = None,
workspace: WorkspaceSearchT | None = None,
) -> Sequence[Checklist]
Search for checklists meeting the specified filters.
Filters are ANDed together, e.g. (checklist.label == label) AND (checklist.search_text =~ field)
Parameters:
- search_text (str | None, default: None) – Case-insensitive search for any of the keywords in all string fields.
- labels (Sequence[str] | None, default: None) – A sequence of labels that must ALL be present on a checklist to be included.
- properties (Mapping[str, str] | None, default: None) – A mapping of key-value pairs that must ALL be present on a checklist to be included.
- author (User | str | None, default: None) – Author of checklists to search for.
- assignee (User | str | None, default: None) – Assignee of checklists to search for.
- workspace (WorkspaceSearchT | None, default: None) – Filters search to the given workspace. If WorkspaceSearchType.ALL is given for workspace, searches within all workspaces the user can access. If WorkspaceSearchType.DEFAULT, searches within the default workspace if configured, or raises a NominalConfigError if one is not configured. If a Workspace or a workspace rid is given, searches will be constrained to that workspace if the user has access to the workspace.
Returns:
- Sequence[Checklist] – Checklists matching all of the provided filters.
search_containerized_extractors
¤
search_containerized_extractors(
*,
search_text: str | None = None,
labels: Sequence[str] | None = None,
properties: Mapping[str, str] | None = None,
workspace: WorkspaceSearchT | None = ALL
) -> Sequence[ContainerizedExtractor]
Search for containerized extractors meeting the specified filters.
Filters are ANDed together, e.g., (extractor.label == label) AND (extractor.workspace == workspace)
Parameters:
- search_text (str | None, default: None) – Fuzzy-searches for a string in the extractor's metadata.
- labels (Sequence[str] | None, default: None) – A list of labels that must ALL be present on an extractor to be included.
- properties (Mapping[str, str] | None, default: None) – A mapping of key-value pairs that must ALL be present on an extractor to be included.
- workspace (WorkspaceSearchT | None, default: ALL) – Filters search to the given workspace. If WorkspaceSearchType.ALL is given for workspace (default), searches within all workspaces the user can access. If WorkspaceSearchType.DEFAULT, searches within the default workspace if configured, or raises a NominalConfigError if one is not configured. If a Workspace or a workspace rid is given, searches will be constrained to that workspace if the user has access to the workspace.
Returns:
- Sequence[ContainerizedExtractor] – All extractors which match all of the provided conditions.
search_data_reviews
¤
search_data_reviews(
assets: Sequence[Asset | str] | None = None,
runs: Sequence[Run | str] | None = None,
) -> Sequence[DataReview]
Search for any data reviews present within a collection of runs and assets.
search_datasets
¤
search_datasets(
*,
exact_match: str | None = None,
search_text: str | None = None,
labels: Sequence[str] | None = None,
properties: Mapping[str, str] | None = None,
before: str | datetime | IntegralNanosecondsUTC | None = None,
after: str | datetime | IntegralNanosecondsUTC | None = None,
workspace_rid: Workspace | str | None = None,
workspace: WorkspaceSearchT = ALL,
archived: bool | None = None
) -> Sequence[Dataset]
Search for datasets meeting the specified filters.
Filters are ANDed together, e.g. (dataset.label == label) AND (dataset.property == property)
Parameters:
- exact_match (str | None, default: None) – Searches for an exact substring of the dataset name.
- search_text (str | None, default: None) – Searches for a (case-insensitive) substring across all text fields.
- labels (Sequence[str] | None, default: None) – A sequence of labels that must ALL be present on a dataset to be included.
- properties (Mapping[str, str] | None, default: None) – A mapping of key-value pairs that must ALL be present on a dataset to be included.
- before (str | datetime | IntegralNanosecondsUTC | None, default: None) – Searches for datasets created before some time (inclusive).
- after (str | datetime | IntegralNanosecondsUTC | None, default: None) – Searches for datasets created after some time (inclusive).
- workspace_rid (Workspace | str | None, default: None) – Deprecated; use workspace instead.
- workspace (WorkspaceSearchT, default: ALL) – Filters search to the given workspace. If WorkspaceSearchType.ALL is given for workspace (default), searches within all workspaces the user can access. If WorkspaceSearchType.DEFAULT, searches within the default workspace if configured, or raises a NominalConfigError if one is not configured. If a Workspace or a workspace rid is given, searches will be constrained to that workspace if the user has access to the workspace.
- archived (bool | None, default: None) – Filters results to either archived or unarchived datasets.
Returns:
- Sequence[Dataset] – Datasets matching all of the provided filters.
search_events
¤
search_events(
*,
search_text: str | None = None,
after: datetime | IntegralNanosecondsUTC | None = None,
before: datetime | IntegralNanosecondsUTC | None = None,
assets: Iterable[Asset | str] | None = None,
labels: Iterable[str] | None = None,
properties: Mapping[str, str] | None = None,
created_by: User | str | None = None,
workbook: Workbook | str | None = None,
data_review: DataReview | str | None = None,
assignee: User | str | None = None,
event_type: EventType | None = None,
workspace: WorkspaceSearchT | None = ALL
) -> Sequence[Event]
Search for events meeting the specified filters.
Filters are ANDed together, e.g. (event.label == label) AND (event.start > before)
Parameters:
- search_text (str | None, default: None) – Searches for a string in the event's metadata.
- after (datetime | IntegralNanosecondsUTC | None, default: None) – Filters to end times after this time, exclusive.
- before (datetime | IntegralNanosecondsUTC | None, default: None) – Filters to start times before this time, exclusive.
- assets (Iterable[Asset | str] | None, default: None) – List of assets that must ALL be present on an event to be included.
- labels (Iterable[str] | None, default: None) – A list of labels that must ALL be present on an event to be included.
- properties (Mapping[str, str] | None, default: None) – A mapping of key-value pairs that must ALL be present on an event to be included.
- created_by (User | str | None, default: None) – A User (or rid) of the author that must be present on an event to be included.
- workbook (Workbook | str | None, default: None) – Workbook to search for events on.
- data_review (DataReview | str | None, default: None) – Search for events from the given data review.
- assignee (User | str | None, default: None) – Search for events with the given assignee.
- event_type (EventType | None, default: None) – Search for events based on level.
- workspace (WorkspaceSearchT | None, default: ALL) – Filters search to the given workspace. If WorkspaceSearchType.ALL is given for workspace (default), searches within all workspaces the user can access. If WorkspaceSearchType.DEFAULT, searches within the default workspace if configured, or raises a NominalConfigError if one is not configured. If a Workspace or a workspace rid is given, searches will be constrained to that workspace if the user has access to the workspace.
Returns:
- Sequence[Event] – Events matching all of the provided filters.
search_runs
¤
search_runs(
start: str | datetime | IntegralNanosecondsUTC | None = None,
end: str | datetime | IntegralNanosecondsUTC | None = None,
name_substring: str | None = None,
*,
labels: Sequence[str] | None = None,
properties: Mapping[str, str] | None = None,
exact_match: str | None = None,
search_text: str | None = None,
workspace: WorkspaceSearchT | None = ALL
) -> Sequence[Run]
Search for runs meeting the specified filters.
Filters are ANDed together, e.g. (run.label == label) AND (run.end <= end)
Parameters:
- start (str | datetime | IntegralNanosecondsUTC | None, default: None) – Inclusive start time for filtering runs.
- end (str | datetime | IntegralNanosecondsUTC | None, default: None) – Inclusive end time for filtering runs.
- name_substring (str | None, default: None) – Searches for a (case-insensitive) substring in the name.
- labels (Sequence[str] | None, default: None) – A sequence of labels that must ALL be present on a run to be included.
- properties (Mapping[str, str] | None, default: None) – A mapping of key-value pairs that must ALL be present on a run to be included.
- exact_match (str | None, default: None) – A case-insensitive substring that must be matched exactly.
- search_text (str | None, default: None) – A case-insensitive substring to perform fuzzy-search on all fields with.
- workspace (WorkspaceSearchT | None, default: ALL) – Filters search to the given workspace. If WorkspaceSearchType.ALL is given for workspace (default), searches within all workspaces the user can access. If WorkspaceSearchType.DEFAULT, searches within the default workspace if configured, or raises a NominalConfigError if one is not configured. If a Workspace or a workspace rid is given, searches will be constrained to that workspace if the user has access to the workspace.
Returns:
- Sequence[Run] – Runs matching all of the provided filters.
search_runs_by_asset
¤
search_secrets
¤
search_secrets(
search_text: str | None = None,
labels: Sequence[str] | None = None,
properties: Mapping[str, str] | None = None,
workspace: WorkspaceSearchT | None = ALL,
) -> Sequence[Secret]
Search for secrets meeting the specified filters.
Filters are ANDed together, e.g. (secret.label == label) AND (secret.property == property)
Parameters:
- search_text (str | None, default: None) – Searches for a (case-insensitive) substring across all text fields.
- labels (Sequence[str] | None, default: None) – A sequence of labels that must ALL be present on a secret to be included.
- properties (Mapping[str, str] | None, default: None) – A mapping of key-value pairs that must ALL be present on a secret to be included.
- workspace (WorkspaceSearchT | None, default: ALL) – Filters search to the given workspace. If WorkspaceSearchType.ALL is given for workspace (default), searches within all workspaces the user can access. If WorkspaceSearchType.DEFAULT, searches within the default workspace if configured, or raises a NominalConfigError if one is not configured. If a Workspace or a workspace rid is given, searches will be constrained to that workspace if the user has access to the workspace.
Returns:
- Sequence[Secret] – Secrets matching all of the provided filters.
search_users
¤
search_users(
exact_match: str | None = None, search_text: str | None = None
) -> Sequence[User]
Search for users meeting the specified filters. Filters are ANDed together, e.g., if exact_match and search_text are both provided, then both must match.
Parameters:
- exact_match (str | None, default: None) – Searches for an exact substring across display name and email.
- search_text (str | None, default: None) – Searches for a (case-insensitive) substring across display name and email.
Returns:
- Sequence[User] – Users matching all of the provided filters.
search_workbook_templates
¤
search_workbook_templates(
*,
exact_match: str | None = None,
search_text: str | None = None,
labels: Sequence[str] | None = None,
properties: Mapping[str, str] | None = None,
created_by: User | str | None = None,
archived: bool | None = None,
published: bool | None = None
) -> Sequence[WorkbookTemplate]
Search for workbook templates meeting the specified filters.
Filters are ANDed together, e.g. (workbook.label == label) AND (workbook.author_rid == "rid")
Parameters:
- exact_match (str | None, default: None) – Searches for a string to match exactly in the template's metadata.
- search_text (str | None, default: None) – Fuzzy-searches for a string in the template's metadata.
- labels (Sequence[str] | None, default: None) – A list of labels that must ALL be present on a workbook template to be included.
- properties (Mapping[str, str] | None, default: None) – A mapping of key-value pairs that must ALL be present on a workbook template to be included.
- created_by (User | str | None, default: None) – Searches for workbook templates with the given creator's rid.
- archived (bool | None, default: None) – Searches for workbook templates that are archived if true.
- published (bool | None, default: None) – Searches for workbook templates that have been published if true.
Returns:
- Sequence[WorkbookTemplate] – All workbook templates which match all of the provided conditions.
search_workbooks
¤
search_workbooks(
*,
include_archived: bool = False,
exact_match: str | None = None,
search_text: str | None = None,
labels: Sequence[str] | None = None,
properties: Mapping[str, str] | None = None,
asset: Asset | str | None = None,
exact_assets: Sequence[Asset | str] | None = None,
created_by: User | str | None = None,
run: Run | str | None = None,
workspace: WorkspaceSearchT | None = ALL,
archived: bool | None = None
) -> Sequence[Workbook]
Search for workbooks meeting the specified filters.
Filters are ANDed together, e.g. (workbook.label == label) AND (workbook.created_by == "rid")
Parameters:
-
include_archived
¤bool
, default:False
) –If true, include archived workbooks in results
-
exact_match
¤str | None
, default:None
) –Searches for a string to match exactly in the workbook's metadata
-
search_text
¤str | None
, default:None
) –Fuzzy-searches for a string in the workbook's metadata
-
labels
¤Sequence[str] | None
, default:None
) –A list of labels that must ALL be present on a workbook to be included.
-
properties
¤Mapping[str, str] | None
, default:None
) –A mapping of key-value pairs that must ALL be present on a workbook to be included.
-
asset
¤Asset | str | None
, default:None
) –Searches for workbooks that include the given asset
-
exact_assets
¤Sequence[Asset | str] | None
, default:None
) –Searches for workbooks that have the exact given assets
-
created_by
¤User | str | None
, default:None
) –Searches for workbooks with the given author
-
run
¤Run | str | None
, default:None
) –Searches for workbooks with the given run
-
workspace
¤WorkspaceSearchT | None
, default:ALL
) –Filters search to given workspace.
If WorkspaceSearchType.ALL is given for workspace
(default), searches within all workspaces the user can
access. If WorkspaceSearchType.DEFAULT, searches within the default workspace if configured, or raises a NominalConfigError if one is not configured. If a Workspace or a workspace rid is given, searches will be constrained to that workspace if the user has access to the workspace.
-
archived
¤bool | None
, default:None
) –If true, searches only for archived workbooks; if false, searches only for workbooks that are not archived.
Returns:
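As a minimal sketch (assuming client is an already-authenticated client object exposing search_workbooks; the search text, label, and asset RID below are placeholders):
# Hypothetical client object and placeholder filter values.
workbooks = client.search_workbooks(
    search_text="thermal",         # fuzzy match across workbook metadata
    labels=["flight-test"],        # every listed label must be present
    asset="ri.example.asset-rid",  # only workbooks that include this asset
)
for workbook in workbooks:
    print(workbook.title, workbook.nominal_url)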
Run
dataclass
¤
Run(
rid: str,
name: str,
description: str,
properties: Mapping[str, str],
labels: Sequence[str],
start: IntegralNanosecondsUTC,
end: IntegralNanosecondsUTC | None,
run_number: int,
assets: Sequence[str],
created_at: IntegralNanosecondsUTC,
_clients: _Clients,
)
Bases: HasRid
add_attachments
¤
add_attachments(
attachments: Iterable[Attachment] | Iterable[str],
) -> None
Add attachments that have already been uploaded to this run.
attachments
can be Attachment
instances, or attachment RIDs.
add_connection
¤
add_connection(
ref_name: str,
connection: Connection | str,
*,
series_tags: Mapping[str, str] | None = None,
offset: timedelta | IntegralNanosecondsDuration | None = None
) -> None
Add a connection to this run.
Runs map a "ref name" (the connection's name within the run) to a Connection (or connection RID). The same type of connection should use the same ref name across runs, since checklists and templates use ref names to reference connections.
Parameters:
-
ref_name
¤str
) –Logical name for the connection to add to the run
-
connection
¤Connection | str
) –Connection to add to the run
-
series_tags
¤Mapping[str, str] | None
, default:None
) –Key-value tags to pre-filter the connection with before adding to the run.
-
offset
¤timedelta | IntegralNanosecondsDuration | None
, default:None
) –Add the connection to the run with a pre-baked offset
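A usage sketch, assuming run is a Run you already hold; the ref name, connection RID, and tag values are placeholders:
from datetime import timedelta

run.add_connection(
    "telemetry",                        # ref name shared with checklists/templates
    "ri.example.connection-rid",        # Connection instance or connection RID
    series_tags={"vehicle": "sn-001"},  # pre-filter the connection's series
    offset=timedelta(seconds=5),        # apply a pre-baked time offset
)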
add_dataset
¤
add_dataset(
ref_name: str,
dataset: Dataset | str,
*,
series_tags: Mapping[str, str] | None = None,
offset: timedelta | IntegralNanosecondsDuration | None = None
) -> None
Add a dataset to this run.
Datasets map "ref names" (their name within the run) to a Dataset (or dataset rid). The same type of datasets should use the same ref name across runs, since checklists and templates use ref names to reference datasets.
Parameters:
-
ref_name
¤str
) –Logical name for the data scope within the run
-
dataset
¤Dataset | str
) –Dataset to add to the run
-
series_tags
¤Mapping[str, str] | None
, default:None
) –Key-value tags to pre-filter the dataset with before adding to the run.
-
offset
¤timedelta | IntegralNanosecondsDuration | None
, default:None
) –Add the dataset to the run with a pre-baked offset
add_datasets
¤
add_datasets(
datasets: Mapping[str, Dataset | str],
*,
series_tags: Mapping[str, str] | None = None,
offset: timedelta | IntegralNanosecondsDuration | None = None
) -> None
Add multiple datasets to this run.
Datasets map "ref names" (their name within the run) to a Dataset (or dataset rid). The same type of datasets should use the same ref name across runs, since checklists and templates use ref names to reference datasets.
Parameters:
-
datasets
¤Mapping[str, Dataset | str]
) –Mapping of logical names to datasets to add to the run
-
series_tags
¤Mapping[str, str] | None
, default:None
) –Key-value tags to pre-filter the datasets with before adding to the run.
-
offset
¤timedelta | IntegralNanosecondsDuration | None
, default:None
) –Add the datasets to the run with a pre-baked offset
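A usage sketch, assuming run is a Run you already hold; the ref names and dataset RIDs are placeholders:
# Attach a single dataset under a ref name, pre-filtered by tags.
run.add_dataset("engine", "ri.example.dataset-rid-1", series_tags={"engine": "left"})

# Or attach several datasets at once, keyed by their ref names.
run.add_datasets(
    {
        "engine": "ri.example.dataset-rid-1",
        "avionics": "ri.example.dataset-rid-2",
    }
)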
add_video
¤
Add a video to this run via a Video object or RID.
archive
¤
archive() -> None
Archive this run. Archived runs are not deleted, but are hidden from the UI.
NOTE: it is not currently possible to unarchive a run once it has been archived.
list_attachments
¤
list_attachments() -> Sequence[Attachment]
List a sequence of Attachments associated with this Run.
list_connections
¤
list_connections() -> Sequence[tuple[str, Connection]]
List the connections associated with this run. Returns (ref_name, connection) pairs for each connection
list_datasets
¤
List the datasets associated with this run. Returns (ref_name, dataset) pairs for each dataset.
list_videos
¤
List a sequence of (ref_name, Video) tuples associated with this Run.
remove_attachments
¤
remove_attachments(
attachments: Iterable[Attachment] | Iterable[str],
) -> None
Remove attachments from this run. Does not remove the attachments from Nominal.
attachments
can be Attachment
instances, or attachment RIDs.
remove_data_sources
¤
remove_data_sources(
*,
ref_names: Sequence[str] | None = None,
data_sources: (
Sequence[Connection | Dataset | Video | str] | None
) = None
) -> None
Remove data sources from this run.
The data_sources list can contain Connection, Dataset, or Video instances, or RIDs as strings.
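For example (a sketch; run, the ref name, and the RID are placeholders):
run.remove_data_sources(
    ref_names=["telemetry"],                    # remove by ref name
    data_sources=["ri.example.dataset-rid-1"],  # or by object / RID
)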
update
¤
update(
*,
name: str | None = None,
start: datetime | IntegralNanosecondsUTC | None = None,
end: datetime | IntegralNanosecondsUTC | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None,
links: Sequence[str] | Sequence[Link] | None = None
) -> Self
Replace run metadata. Updates the current instance, and returns it. Only the metadata passed in will be replaced, the rest will remain untouched.
Links can be URLs or tuples of (URL, name).
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b"]
for old_label in run.labels:
    new_labels.append(old_label)
run = run.update(labels=new_labels)
Secret
dataclass
¤
Secret(
rid: str,
name: str,
description: str,
properties: Mapping[str, str],
labels: Sequence[str],
created_at: IntegralNanosecondsUTC,
_clients: _Clients,
)
Bases: HasRid
update
¤
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None
) -> Self
Update the secret in-place.
Parameters:
-
name
¤str | None
, default:None
) –New name of the secret
-
description
¤str | None
, default:None
) –New description of the secret
-
properties
¤Mapping[str, str] | None
, default:None
) –New properties for the secret
-
labels
¤Sequence[str] | None
, default:None
) –New labels for the secret
Returns:
-
Self
–Updated secret metadata.
TagDetails
dataclass
¤
TagDetails(tags: Sequence[str], default_tag: str)
Details about docker image tags to register for a custom extractor.
Parameters:
TimestampMetadata
dataclass
¤
TimestampMetadata(
series_name: str, timestamp_type: _ConjureTimestampType
)
Metadata about the timestamp output from the extractor.
Parameters:
Unit
dataclass
¤
Combination of the name and symbol of a unit within the supported systems of measurement.
This is primarily used when setting or retrieving the units of a channel within a dataset.
symbol
instance-attribute
¤
symbol: str
Abbreviated symbol for the unit (e.g. 'C'). See: https://ucum.org/ucum
User
dataclass
¤
UserPassAuth
dataclass
¤
UserPassAuth(username: str, password_rid: str)
User/password based authentication credentials for pulling a docker image.
Parameters:
Video
dataclass
¤
Video(
rid: str,
name: str,
description: str | None,
properties: Mapping[str, str],
labels: Sequence[str],
created_at: IntegralNanosecondsUTC,
_clients: _Clients,
)
Bases: HasRid
add_mcap_to_video_from_io
class-attribute
instance-attribute
¤
add_mcap_to_video_from_io = add_mcap_from_io
add_file
¤
add_file(
path: Path | str,
start: datetime | IntegralNanosecondsUTC | None = None,
frame_timestamps: Sequence[IntegralNanosecondsUTC] | None = None,
description: str | None = None,
) -> VideoFile
Append to a video from a file path pointing to H264-encoded video data.
Parameters:
-
path
¤Path | str
) –Path to the video file to add to an existing video within Nominal
-
start
¤datetime | IntegralNanosecondsUTC | None
, default:None
) –Starting timestamp of the video file in absolute UTC time
-
frame_timestamps
¤Sequence[IntegralNanosecondsUTC] | None
, default:None
) –Per-frame absolute nanosecond timestamps. Most use cases should instead use the 'start' parameter, unless precise per-frame metadata is available and desired.
-
description
¤str | None
, default:None
) –Description of the video file. NOTE: this is currently not displayed to users and may be removed in the future.
Returns:
-
VideoFile
–Reference to the created video file.
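A usage sketch, assuming video is a Video you already hold; the file path and start time are placeholders:
from datetime import datetime, timezone

video_file = video.add_file(
    "flights/cam1_segment_003.mp4",                          # H264-encoded video file
    start=datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc),  # absolute start time of the clip
)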
add_from_io
¤
add_from_io(
video: BinaryIO,
name: str,
start: datetime | IntegralNanosecondsUTC | None = None,
frame_timestamps: Sequence[IntegralNanosecondsUTC] | None = None,
description: str | None = None,
file_type: tuple[str, str] | FileType = MP4,
) -> VideoFile
Append to a video from a file-like object containing video data encoded in H264 or H265.
Parameters:
-
video
¤BinaryIO
) –File-like object containing video data encoded in H264 or H265.
-
name
¤str
) –Name of the file to use when uploading to S3.
-
start
¤datetime | IntegralNanosecondsUTC | None
, default:None
) –Starting timestamp of the video file in absolute UTC time
-
frame_timestamps
¤Sequence[IntegralNanosecondsUTC] | None
, default:None
) –Per-frame absolute nanosecond timestamps. Most use cases should instead use the 'start' parameter, unless precise per-frame metadata is available and desired.
-
description
¤str | None
, default:None
) –Description of the video file. NOTE: this is currently not displayed to users and may be removed in the future.
-
file_type
¤tuple[str, str] | FileType
, default:MP4
) –Metadata about the type of video file, e.g., MP4 vs. MKV.
Returns:
-
VideoFile
–Reference to the created video file.
add_mcap
¤
Append to a video from a file path pointing to an MCAP file containing video data.
Parameters:
-
path
¤Path
) –Path to the video file to add to an existing video within Nominal
-
topic
¤str
) –Topic pointing to video data within the MCAP file.
-
description
¤str | None
, default:None
) –Description of the video file. NOTE: this is currently not displayed to users and may be removed in the future.
Returns:
-
VideoFile
–Reference to the created video file.
add_mcap_from_io
¤
add_mcap_from_io(
mcap: BinaryIO,
name: str,
topic: str,
description: str | None = None,
file_type: tuple[str, str] | FileType = MCAP,
) -> VideoFile
Append to a video from a file-like binary stream containing MCAP data with embedded video.
Parameters:
-
mcap
¤BinaryIO
) –File-like binary object containing MCAP data to upload.
-
name
¤str
) –Name of the file to create in S3 during upload
-
topic
¤str
) –Topic pointing to video data within the MCAP file.
-
description
¤str | None
, default:None
) –Description of the video file. NOTE: this is currently not displayed to users and may be removed in the future.
-
file_type
¤tuple[str, str] | FileType
, default:MCAP
) –Metadata about the type of video (e.g. MCAP).
Returns:
-
VideoFile
–Reference to the created video file.
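A usage sketch, assuming video is a Video you already hold; the file path and topic are placeholders:
with open("logs/recording.mcap", "rb") as mcap:
    video_file = video.add_mcap_from_io(
        mcap,
        name="recording.mcap",       # object name to use during upload
        topic="/camera/front/h264",  # MCAP topic containing the video frames
    )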
archive
¤
archive() -> None
Archive this video. Archived videos are not deleted, but are hidden from the UI.
poll_until_ingestion_completed
¤
Block until video ingestion has completed. This method polls Nominal for ingest status on an interval after uploading a video.
Raises:
NominalIngestFailed: if the ingest failed
NominalIngestError: if the ingest status is not known
unarchive
¤
unarchive() -> None
Unarchives this video, allowing it to show up in the 'All Videos' pane in the UI.
update
¤
update(
*,
name: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None
) -> Self
Replace video metadata. Updates the current instance, and returns it.
Only the metadata passed in will be replaced, the rest will remain untouched.
Note: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b"]
for old_label in video.labels:
    new_labels.append(old_label)
video = video.update(labels=new_labels)
VideoFile
dataclass
¤
VideoFile(
rid: str,
name: str,
description: str | None,
created_at: IntegralNanosecondsUTC,
_clients: _Clients,
)
Bases: HasRid
archive
¤
archive() -> None
Archive the video file, preventing it from appearing when playing back the video.
poll_until_ingestion_completed
¤
Block until video ingestion has completed. This method polls Nominal for ingest status on an interval after uploading a video file.
Raises:
NominalIngestFailed: if the ingest failed
NominalIngestError: if the ingest status is not known
unarchive
¤
unarchive() -> None
Unarchive the video file, allowing it to appear when playing back the video
update
¤
update(
*,
name: str | None = None,
description: str | None = None,
starting_timestamp: datetime | IntegralNanosecondsUTC | None = None,
ending_timestamp: datetime | IntegralNanosecondsUTC | None = None,
true_frame_rate: float | None = None,
scale_factor: float | None = None
) -> Self
Update video file metadata.
Parameters:
-
name
¤str | None
, default:None
) –Name of the video file
-
description
¤str | None
, default:None
) –Description of the video file
-
starting_timestamp
¤datetime | IntegralNanosecondsUTC | None
, default:None
) –Starting timestamp for the video file
-
ending_timestamp
¤datetime | IntegralNanosecondsUTC | None
, default:None
) –Ending timestamp for the video file
-
true_frame_rate
¤float | None
, default:None
) –Frame rate that the video file was recorded at, regardless of the frame rate that the media plays at.
-
scale_factor
¤float | None
, default:None
) –Ratio of absolute time to media time for the video file. For example, a value of 2 would indicate that for every second of media, two seconds have elapsed in absolute time.
Returns:
-
Self
–Updated video file metadata.
NOTE: only one of {ending_timestamp, true_frame_rate, scale_factor} may be present at one time.
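A usage sketch, assuming video_file is a VideoFile you already hold; only true_frame_rate is set, respecting the mutual-exclusion note above:
video_file = video_file.update(
    name="cam1_segment_003",
    true_frame_rate=29.97,  # recorded frame rate, independent of playback rate
)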
Workbook
dataclass
¤
Workbook(
rid: str,
title: str,
description: str,
workbook_type: WorkbookType,
run_rids: Sequence[str] | None,
asset_rids: Sequence[str] | None,
_clients: _Clients,
)
Bases: HasRid
asset_rids
instance-attribute
¤
Mutually exclusive with run_rids.
May be empty when a workbook is a fresh comparison workbook.
nominal_url
property
¤
nominal_url: str
Returns a link to the page for this Workbook in the Nominal app
run_rids
instance-attribute
¤
Mutually exclusive with asset_rids.
May be empty when a workbook is a fresh comparison workbook.
archive
¤
archive() -> None
Archive this workbook. Archived workbooks are not deleted, but are hidden from the UI.
Note
Archiving is an idempotent operation: calling archive() on an archived workbook will result in the workbook staying archived.
clone
¤
clone(title: str | None = None, description: str | None = None) -> Self
Create a new workbook copy from this workbook and return a reference to the cloned version.
Parameters:
-
title
¤str | None
, default:None
) –New title for the cloned workbook. Defaults to "Workbook clone from '<current workbook title>'".
-
description
¤str | None
, default:None
) –New description for the cloned workbook. Defaults to the current description.
Returns:
-
Self
–Reference to the cloned workbook
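A usage sketch, assuming workbook is a Workbook you already hold; the title and description are placeholders:
copy = workbook.clone(
    title="Post-flight review (copy)",
    description="Cloned for further analysis",
)
copy.lock()  # optionally prevent further edits to the clone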
lock
¤
lock() -> None
Locks the workbook, preventing changes from being made to it.
Note
Locking is an idempotent operation: calling lock() on a locked workbook will result in the workbook staying locked.
unarchive
¤
unarchive() -> None
Unarchive this workbook, allowing it to be viewed in the UI.
Note
Unarchiving is an idempotent operation: calling unarchive() on an unarchived workbook will result in the workbook staying unarchived.
unlock
¤
unlock() -> None
Unlocks the workbook, allowing changes to be made to it.
Note
Unlocking is an idempotent operation: calling unlock() on an unlocked workbook will result in the workbook staying unlocked.
update
¤
update(
*,
title: str | None = None,
description: str | None = None,
properties: Mapping[str, str] | None = None,
labels: Sequence[str] | None = None
) -> Self
Replace workbook metadata. Updates the current instance, and returns it.
Only the metadata passed in will be replaced, the rest will remain untouched.
NOTE: This replaces the metadata rather than appending it. To append to labels or properties, merge them before calling this method. E.g.:
new_labels = ["new-label-a", "new-label-b"]
for old_label in workbook.labels:
    new_labels.append(old_label)
workbook = workbook.update(labels=new_labels)
WorkbookType
¤
Workspace
dataclass
¤
WorkspaceSearchType
¤
WriteStream
dataclass
¤
WriteStream(
batch_size: int,
max_wait: timedelta,
_process_batch: Callable[[Sequence[BatchItem[StreamType]]], None],
_executor: ThreadPoolExecutor,
_thread_safe_batch: ThreadSafeBatch[StreamType],
_stop: Event,
_pending_jobs: BoundedSemaphore,
)
Bases: WriteStreamBase[StreamType]
close
¤
close(wait: bool = True) -> None
Close the Nominal Stream.
Stops the process timeout thread and flushes any remaining batches.
create
classmethod
¤
create(
batch_size: int,
max_wait: timedelta,
process_batch: Callable[[Sequence[BatchItem[StreamType]]], None],
) -> Self
Create the stream.
enqueue
¤
enqueue(
channel_name: str,
timestamp: str | datetime | IntegralNanosecondsUTC,
value: StreamType,
tags: Mapping[str, str] | None = None,
) -> None
Add a message to the queue after normalizing the timestamp to IntegralNanosecondsUTC.
The message is added to the thread-safe batch and flushed if the batch size is reached.
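A usage sketch, assuming stream is a WriteStream you have already opened (how it is obtained is not shown here); the channel name, value, and tags are placeholders:
from datetime import datetime, timezone

stream.enqueue(
    "engine.temperature",        # channel name
    datetime.now(timezone.utc),  # timestamp (string, datetime, or nanoseconds UTC)
    81.4,                        # value
    tags={"vehicle": "sn-001"},  # must include the connection's required_tags
)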
enqueue_batch
¤
enqueue_batch(
channel_name: str,
timestamps: Sequence[str | datetime | IntegralNanosecondsUTC],
values: Sequence[StreamType],
tags: Mapping[str, str] | None = None,
) -> None
Add a sequence of messages to the queue to upload to Nominal.
Messages are added one-by-one (with timestamp normalization) and flushed based on the batch conditions.
Parameters:
-
channel_name
¤str
) –Name of the channel to upload data for.
-
timestamps
¤Sequence[str | datetime | IntegralNanosecondsUTC]
) –Absolute timestamps of the data being uploaded.
-
values
¤Sequence[StreamType]
) –Values to write to the specified channel.
-
tags
¤Mapping[str, str] | None
, default:None
) –Key-value tags associated with the data being uploaded. NOTE: This must include all
required_tags
used when creating aConnection
to Nominal.
enqueue_from_dict
¤
enqueue_from_dict(
timestamp: str | datetime | IntegralNanosecondsUTC,
channel_values: Mapping[str, StreamType],
tags: Mapping[str, str] | None = None,
) -> None
Write multiple channel values at a given timestamp using a flattened dictionary.
Each key in the dictionary is treated as a channel name and the corresponding value is enqueued with the given timestamp.
Parameters:
-
timestamp
¤str | datetime | IntegralNanosecondsUTC
) –The shared timestamp to use for all items to enqueue.
-
channel_values
¤Mapping[str, StreamType]
) –A dictionary mapping channel names to their respective values.
-
tags
¤Mapping[str, str] | None
, default:None
) –Key-value tags associated with the data being uploaded. NOTE: This should include all
required_tags
used when creating aConnection
to Nominal.
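A usage sketch of the batch variants, assuming the same placeholder stream, channel names, and tags as above:
from datetime import datetime, timedelta, timezone

t0 = datetime.now(timezone.utc)

# Many channels at one timestamp.
stream.enqueue_from_dict(
    t0,
    {"engine.temperature": 81.4, "engine.rpm": 5200.0},
    tags={"vehicle": "sn-001"},
)

# Many timestamped values for one channel; timestamps and values align one-to-one.
stream.enqueue_batch(
    "engine.temperature",
    [t0, t0 + timedelta(seconds=1)],
    [81.4, 81.6],
    tags={"vehicle": "sn-001"},
)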
flush
¤
Flush the current batch of records to Nominal in a background thread.
wait: If true, wait for the batch to complete uploading before returning
timeout: If wait is true, the time to wait for flush completion in seconds.
NOTE: If None, waits indefinitely.
poll_until_ingestion_completed
¤
poll_until_ingestion_completed(
datasets: Iterable[Dataset],
interval: timedelta = timedelta(seconds=1),
) -> None
Block until all dataset ingestions have completed (succeeded or failed).
This method polls Nominal for ingest status on each of the datasets on an interval. No specific ordering is guaranteed, but all datasets will be checked at least once.
Raises:
NominalIngestMultiError: if any of the datasets failed to ingest
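A usage sketch, assuming poll_until_ingestion_completed is imported from the same package as the classes above (exact import path not shown here) and dataset_a / dataset_b are Dataset objects you already hold:
from datetime import timedelta

poll_until_ingestion_completed(
    [dataset_a, dataset_b],
    interval=timedelta(seconds=5),  # check ingest status every 5 seconds
)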