pepys_admin package

pepys_admin.cli module

pepys_admin.cli.main()[source]

Main function which parses the command line arguments, creates a DataStore object and calls run_admin_shell to open a shell.

pepys_admin.cli.run_shell(path, training=False, data_store=None, db=None, viewer=False)[source]

Runs the shell. The arguments allow the individual command-line options (db, path, training) to be given directly, or a data_store object to be passed instead of a db name, to allow for advanced use in unit tests.

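A minimal sketch of calling run_shell directly; the argument values below are illustrative only, and store stands for a DataStore instance created elsewhere:

from pepys_admin.cli import run_shell

# Open the admin shell against a named database (values are examples only)
run_shell(path=".", db="pepys.sqlite")

# Or, for advanced use such as unit tests, pass an existing DataStore
# instead of a database name
run_shell(path=".", data_store=store)
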
pepys_admin.admin_cli module

class pepys_admin.admin_cli.AdminShell(data_store, csv_path='/home/docs/checkouts/readthedocs.org/user_builds/pepys-import/checkouts/latest/pepys_admin')[source]

Bases: pepys_admin.base_cli.BaseShell

Main Shell of Pepys Admin.

choices = '(1) Initialise/Clear\n(2) Status\n(3) Export\n(4) Snapshot\n(5) Migrate\n(6) View Data\n(7) View Docs\n(8) Maintenance\n(9) Maintain tasks\n(10) View dashboard\n(.) Exit\n'
static do_exit()[source]

Exit the application

do_export()[source]

Runs the ExportShell which offers to export datafiles.

do_initialise()[source]

Runs the InitialiseShell which offers to clear contents, import sample data, create/delete schema.

do_maintenance_gui()[source]
do_migrate()[source]

Runs Alembic’s upgrade command to migrate the database to the latest version.

do_snapshot()[source]

Runs the SnapshotShell to take a snapshot of reference and/or metadata tables.

do_status()[source]

Prints table summaries and database version.

do_tasks_gui()[source]
do_view_dashboard()[source]
do_view_data()[source]

Runs the ViewDataShell which offers to view a table and run SQL.

do_view_docs()[source]
prompt = '(pepys-admin) '

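A minimal sketch of driving the AdminShell programmatically, assuming store is a DataStore instance created elsewhere (the default csv_path is used):

from pepys_admin.admin_cli import AdminShell

shell = AdminShell(store)
shell.do_status()   # prints table summaries and the database version
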
pepys_admin.export_cli module

class pepys_admin.export_cli.ExportByPlatformNameShell(data_store, title, choices_full_text, choices_idx, objects)[source]

Bases: pepys_admin.base_cli.BaseShell

Offers to export datafiles by platform and sensor

default(line)[source]

Called on an input line when the command prefix is not recognized.

If this method is not overridden, it prints an error message and returns.

static do_cancel()[source]

Returns to the previous menu

do_export(option)[source]

Asks the user for a file name, then calls DataStore.export_datafile to export Datafiles.

prompt = '(pepys-admin) (export by platform) '
class pepys_admin.export_cli.ExportShell(data_store)[source]

Bases: pepys_admin.base_cli.BaseShell

Offers to export datafiles by name, by platform and sensor

choices = '(1) Export by name\n(2) Export by Platform and sensor\n(.) Back\n'
static do_cancel()[source]

Returns to the previous menu

do_export()[source]

Exports datafiles by name.

do_export_all()[source]

Exports all datafiles.

do_export_by_platform_name()[source]

Exports datafiles by platform and sensor names. It asks the user to select an existing Platform first, then finds all datafile objects which include the selected Platform, creates a dynamic intro (menu) from the found datafile objects, and runs the ExportByPlatformNameShell.

prompt = '(pepys-admin) (export) '

pepys_admin.initialise_cli module

class pepys_admin.initialise_cli.InitialiseShell(data_store, parent_shell, csv_path)[source]

Bases: pepys_admin.base_cli.BaseShell

Offers the user options to clear contents, import sample reference data and metadata, and create/clear the schema.

choices = '(1) Clear database contents\n(2) Clear database schema\n(3) Create Pepys schema\n(4) Import Reference data\n(5) Import Metadata\n(.) Back\n'
static do_cancel()[source]

Returns to the previous menu

do_clear_db_contents()[source]

Truncates all tables in the database.

do_clear_db_schema()[source]

Deletes the schema from the database, i.e. removes all created tables.

do_create_pepys_schema()[source]

Creates the tables and the schema.

do_import_metadata()[source]

Imports metadata from the given CSV files path.

do_import_reference_data()[source]

Imports reference data from the given CSV files path.

prompt = '(initialise) '

pepys_admin.view_data_cli module

class pepys_admin.view_data_cli.ViewDataShell(data_store, viewer=False)[source]

Bases: pepys_admin.base_cli.BaseShell

Offers to view a table and run SQL.

choices = '(1) View Table\n(2) Output Table to CSV\n(3) Run SQL\n(4) Output SQL Results to CSV\n(.) Back\n'
do_cancel()[source]

Returns to the previous menu

do_output_sql_to_csv()[source]

Executes the input. Exports the results of the query to a CSV file.

do_output_table_to_csv()[source]

Asks the user to select a table name. Fetches all objects and exports them to a CSV file.

do_run_sql()[source]

Executes the input. Prints the results of the query in table format.

do_view_table()[source]

Asks the user to select a table name. Converts the table name to a class name, fetches objects up to MAX_ROWS_DISPLAYED, and prints them in table format.

generate_view_table_text(selected_table)[source]
pepys_admin.view_data_cli.bottom_toolbar()[source]

pepys_admin.snapshot_cli module

class pepys_admin.snapshot_cli.SnapshotShell(data_store)[source]

Bases: pepys_admin.base_cli.BaseShell

Offers to create a snapshot with reference data, or with reference data and metadata.

choices = '(1) Create snapshot with Reference data\n(2) Create snapshot with Reference data & Metadata\n(3) Create snapshot with all data\n(4) Create snapshot with all data, filtered by time\n(5) Create snapshot with all data, filtered by location\n(6) Create snapshot with all data, filtered by wargame\n(7) Create snapshot with all data, filtered by serial\n(8) Merge databases\n(.) Back\n'
static do_cancel()[source]

Returns to the previous menu

do_export_all_data()[source]
do_export_all_data_filter_location()[source]
do_export_all_data_filter_serial_participation()[source]
do_export_all_data_filter_time()[source]
do_export_all_data_filter_wargame_participation()[source]
do_export_reference_data()[source]

Exports reference data.

do_export_reference_data_and_metadata()[source]

Exports reference data and metadata.

do_merge_databases()[source]
prompt = '(pepys-admin) (snapshot) '

pepys_admin.snapshot_helpers module

pepys_admin.snapshot_helpers.export_all_measurement_tables(source_store, destination_store)[source]
pepys_admin.snapshot_helpers.export_measurement_table_with_filter(source_store, destination_store, table, filter=None)[source]
pepys_admin.snapshot_helpers.export_measurement_tables_filtered_by_location(source_store, destination_store, xmin, ymin, xmax, ymax)[source]
pepys_admin.snapshot_helpers.export_measurement_tables_filtered_by_serial_participation(source_store, destination_store, selected_serial)[source]
pepys_admin.snapshot_helpers.export_measurement_tables_filtered_by_time(source_store, destination_store, start_time, end_time)[source]
pepys_admin.snapshot_helpers.export_measurement_tables_filtered_by_wargame_participation(source_store, destination_store, selected_wargame)[source]
pepys_admin.snapshot_helpers.export_metadata_tables(source_store, destination_store, privacy_ids=None)[source]

Copies Platform, Sensor, Datafile and Synonym objects from source_store to destination_store.

Parameters
  • source_store (DataStore) – A DataStore object to fetch objects

  • destination_store (DataStore) – A DataStore object into which the objects from source_store are copied

  • privacy_ids (List) – A list of Privacy IDs used to filter the objects

pepys_admin.snapshot_helpers.export_reference_tables(source_store, destination_store, table_objects)[source]

Copies table objects from source_store to destination_store.

Parameters
  • source_store (DataStore) – A DataStore object to fetch objects

  • destination_store (DataStore) – A DataStore object into which the objects from source_store are copied

  • table_objects (List) – A list of table objects

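A minimal usage sketch for the export helpers above, assuming source_store and destination_store are DataStore objects and reference_table_objects is a list of reference table objects gathered elsewhere (all three names are illustrative):

from pepys_admin.snapshot_helpers import (
    export_metadata_tables,
    export_reference_tables,
)

# Copy the reference tables first, then the metadata tables
# (Platform, Sensor, Datafile and Synonym objects)
export_reference_tables(source_store, destination_store, reference_table_objects)
export_metadata_tables(source_store, destination_store, privacy_ids=None)
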
pepys_admin.snapshot_helpers.find_sqlite_table_object(table_object, data_store)[source]

Finds and returns a SQLite Base class which will be used to create and insert values.

Parameters
  • table_object (sqlalchemy.ext.declarative.DeclarativeMeta) – A table object

  • data_store (DataStore) – A DataStore object

Returns

Returns a table object

Return type

sqlalchemy.ext.declarative.DeclarativeMeta

pepys_admin.snapshot_helpers.get_number_from_user(prompt_text)[source]
pepys_admin.snapshot_helpers.get_time_from_user(prompt_text)[source]
pepys_admin.snapshot_helpers.row_to_dict(table_object, data_store)[source]

Converts all entities of a table into a dict of {column_name: value}s.

Parameters
  • table_object (sqlalchemy.ext.declarative.DeclarativeMeta) – A table object

  • data_store (DataStore) – A DataStore object

Returns

Returns a dictionary with values

Return type

Dict

pepys_admin.merge module

class pepys_admin.merge.MergeDatabases(master_store, slave_store)[source]

Bases: object

add_changes(changes_to_add)[source]

Copies the Change entries with the specified ids in changes_to_add from the slave_store to the master_store.

add_logs(logs_to_add)[source]

Copies the Log entries with the specified ids in logs_to_add from the slave_store to the master_store.

merge_all_measurement_tables(added_datafile_ids)[source]

Copies across all entries in all measurement tables that have a source_id in the list of added_datafile_ids.

Must be run after reference and metadata tables have been merged.

merge_all_metadata_tables()[source]

Merge most metadata tables from the slave_store into the master_store, using the merge_change_id as the change_id for any modifications occurring as part of the merge.

Note: this does not merge the Datafile, Synonym, Log or Change tables - these are handled separately.

merge_all_reference_tables()[source]

Merges all reference tables from the slave_store into the master_store.

merge_all_tables()[source]

Does a full merge, taking all data from the slave_store database and merging it into the master_store database. At the end of merging, prints some summary tables with merge statistics and lists of new objects added.

Both master_store and slave_store can be connected to either Postgres or SQLite databases.

The overall outline of the merge is that we first merge the reference tables and most of the metadata tables, then merge the Synonyms and Datafiles tables, then copy across the relevant entries from the measurement tables, before finally filtering and copying the Logs and Changes tables.

In general, “merging” here means comparing the two databases and dealing with these specific situations:

a) The exact same entry exists in both databases, with the same GUID. This will occur when this entry was exported from the master database to the slave database. This is counted as ‘already present’, and nothing is done to change it.

b) An entry exists in the slave database which isn’t present in the master database. This will occur when the entry is added to the slave database after the two databases have been separated. This entry will be added to the master database, and counted as an item ‘added’.

c) An entry exists in the slave database with the same details as in the master database, but with a different GUID. The details that are compared to see if the entry is the same are the name field for a reference table, the fields defined in a unique constraint for any other table (eg. name and host for Sensors) or all fields if a unique constraint isn’t defined. If optional fields are present in the slave database that are not present in the entry in the master database, then these are copied across. If the privacy value in the slave database is higher (ie. more secure) than the entry in the master database, then the privacy is updated to match the slave value. This entry counts as a ‘modified’ entry, as the GUID in the slave database is modified to match the GUID in the master, so that other objects can be copied without failing foreign key integrity checks. This does NOT mean that the data for the entry has been modified - this is only the case if the data_changed field is set to True in the resulting list of ids.

The measurement tables (States, Contacts etc) aren’t merged as such: new entries are just copied across. This is done based on the entries in the Datafiles table: we keep track of the IDs of all datafile entries that are added to the master from the slave, and then just copy the measurement data that has come from those datafiles.

Logs and Changes are merged at the end, so that we can check each Log entry to see if it refers to something that actually exists in the master database, and only copy it if it does.

The results from the merge_all_* functions consist of:

  • A list of ‘added names’: a dictionary with keys for each table merged, and values of a list of all the new ‘names’ that have been added (the ‘name’ is either the name field, if it exists, or the ‘reference’ or ‘synonym’ field if it doesn’t.)

  • A list of statistics: a dictionary with keys for each table merged, and values of a dictionary with counts of items ‘already there’, ‘modified’ and ‘added’.

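A minimal usage sketch for a full merge, assuming master_store and slave_store are DataStore objects connected to the two databases (construction details omitted):

from pepys_admin.merge import MergeDatabases

merger = MergeDatabases(master_store, slave_store)
merger.merge_all_tables()   # merges everything and prints summary tables
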
merge_extractions(added_datafile_ids)[source]
merge_logs_and_changes()[source]

Merges the Logs and Changes tables from the slave_store into the master_store.

Must be run after all other merging is complete.

merge_measurement_table(table_object_name, added_datafile_ids)[source]

Merges the specified measurement table (specified as the object name, so singular) from the slave_store into the master_store, copying across entries which have a source_id in added_datafile_ids.

merge_metadata_table(table_object_name)[source]

Merge the specified metadata table (table_object_name should be the singular name for the table, such as “Platform”) from the slave_store into the master_store. Use the given change_id for any modifications that occur because of the merge (these modifications would happen if an optional value is set on the slave but not on the master, and it is therefore copied across).

merge_reference_table(table_object_name)[source]

Merges a reference table (table_object_name should be the singular name of the table object, such as PlatformType) from the slave_store into the master_store.

prepare_merge_logs()[source]

Works out which Log and Change entries need copying from the slave_store to the master_store, by checking which entries refer to something that is actually in the master database.

Must be run after all other merging is complete.

static rows_to_list_of_dicts(results)[source]

Converts a list of rows returned from a SQLAlchemy query into a list of dicts.

The obvious way to do this would be to look in the __table__ attribute of the row and get the column names, and then extract those values from the row. However, this will not work for the Measurement tables, as the attributes of the class have different names to the column names. For example, the column name is “speed” but the attribute name is “_speed” and a property (with getter and setter methods) is used to convert between the two. The bulk_insert_mappings method doesn’t use the Table object, so doesn’t use the properties to do the conversion.

Therefore, we need to have all of the attributes of this class, including the attributes starting with _. However, we don’t want the SQLAlchemy internal attributes, or the ‘dunder’ methods that start with a __. Therefore, this function excludes those, but keeps all others.

We also need to process the location field to make sure it is in WKT format so the database can understand it.

Note: This will currently fail for any table with a generic geometry field in it (ie. the geometry1 table), but this is not used currently.

static split_list(lst, n=100)[source]

Yield successive n-sized chunks from lst.

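For illustration, the expected chunking behaviour (output shown as a comment):

chunks = list(MergeDatabases.split_list(list(range(5)), n=2))
# chunks == [[0, 1], [2, 3], [4]]
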
update_logs_table(modified_ids)[source]

Updates the Logs table in the slave_store for entries which have had their GUID modified when they were merged with the master_store.

This occurs in the situation where there are entries in both master_store and slave_store with the same details, and therefore the slave GUID for that entry is updated to match the master GUID. A list of information about those entries is passed to this function, and their original IDs (‘from_id’) are searched in the Logs table and updated (to ‘to_id’) if found.

update_master_from_slave_entry(master_entry, slave_entry)[source]

Updates the entry in master with any fields that are set on slave but not on master (ie. optional fields like trigraph that may be left blank), or if the slave privacy is higher (ie. more secure) than in the entry on master.

Returns True if the entry has been modified.

update_synonyms_table(modified_ids)[source]

Updates the Synonyms table in the slave_store for entries which have had their GUID modified when they were merged with the master_store.

This occurs in the situation where there are entries in both master_store and slave_store with the same details, and therefore the slave GUID for that entry is updated to match the master GUID. A list of information about those entries is passed to this function, and their original IDs (‘from_id’) are searched in the Synonyms table and updated (to ‘to_id’) if found.

pepys_admin.utils module

class pepys_admin.utils.StdoutAndFileWriter(filename)[source]

Bases: object

close()[source]
flush()[source]
isatty()[source]
write(message)[source]
pepys_admin.utils.check_sqlalchemy_results_are_equal(results1, results2)[source]

Compare two lists of SQLAlchemy results to see if they are equal

pepys_admin.utils.create_statistics_from_ids(ids)[source]

Create a statistics dictionary from a dict of ids/details for items added, modified and already_there

pepys_admin.utils.database_at_latest_revision(db_path)[source]
pepys_admin.utils.get_default_export_folder()[source]
pepys_admin.utils.get_name_for_obj(obj)[source]

Return a ‘name’ field for an object. Most objects have a field called name, so we try this first. If this fails, we try reference (for Datafiles) and synonym (for Synonyms), otherwise we just return ‘Unknown’.

pepys_admin.utils.make_query_for_all_data_columns(table_object, comparison_object, session)[source]

Makes a query to search for an object where all data columns match the comparison object

In this case, the data columns are all columns excluding the primary key and the created_date column.

pepys_admin.utils.make_query_for_cols(table_object, comparison_object, columns, session)[source]

Create a SQLAlchemy query for the given table_object, with a filter comparing the given columns with the given comparison_object.

For example, if the comparison object contains the values {'name': 'GPS', 'host': 42, 'type': 12, 'blah': 'hello'}, and the columns are ['name', 'host'], then this will return a query like this:

session.query(table_object).filter(table_object.name == "GPS").filter(table_object.host == 42)

pepys_admin.utils.make_query_for_unique_cols_or_all(table_object, comparison_object, session)[source]

Create a SQLAlchemy query object for the given table_object, with a filter comparing it to the comparison object. The filter will use just the columns defined in the unique constraint for the table if a unique constraint is defined, otherwise it will compare all columns.

pepys_admin.utils.print_names_added(names)[source]

Print the list of names of items added in a sensible format

pepys_admin.utils.read_latest_revisions_file()[source]
pepys_admin.utils.redirect_stdout_to_file_and_screen(filename)[source]
pepys_admin.utils.round_object_if_necessary(obj)[source]
pepys_admin.utils.sqlalchemy_obj_to_dict(obj, remove_id=False)[source]

Converts a SQLAlchemy result from a query into a dict of {column_name: value}s, excluding the ‘created_date’ column.

This is used for tests. To make the tests work on machines that round floats differently, we round the objects if necessary before putting them in the dict. This deals with issues we have if we have a Quantity with a value of 5.0000000024 from Postgres and a value of 5.0000 on SQLite.

pepys_admin.utils.statistics_to_table_data(statistics)[source]

Convert a dictionary of statistics data into tuples ready for displaying as a table with the tabulate function.