def translate_mysql_to_python(data_tuple):
    """This method does the translation from MySQL-speak to Python-speak (NULL -> None, true -> True, false -> False). Interestingly enough, the
    inverse direction needs no such care: the mysql.connector adapter already deals with it. Given that this module is a Python to MySQL interface of
    sorts, it seems only logical that it also concerns itself with the different nomenclatures used between the two platforms. Nones, Trues and Falses
    passed in SQL strings executed through this adapter have so far been stored correctly (Nones properly replaced by NULL and so on), so only the
    results read back from the database need this treatment. Since results from the MySQL databases are obtained through the mysql-python connector,
    they are returned in a tuple. Python doesn't allow tuple data to be edited, only list data; fortunately the transition between the two types is trivial.
    @:param data_tuple (tuple) - A tuple with as many results as the columns in the returned record
    @:return translated_data_tuple (tuple) - The input tuple with all the offending values replaced by their Python-speak equivalents
    (NULL -> None, true -> True and false -> False)
    @:raise InputValidationException - If errors appear during the validation of inputs"""
    trans_mysql_log = ambi_logger.get_logger(__name__)
    try:
        validate_input_type(data_tuple, tuple)
    except InputValidationException as ive:
        trans_mysql_log.error(ive.message)
        raise ive
    # Start by switching to an editable list
    data_list = list(data_tuple)
    for i in range(len(data_list)):
        # Standardize the comparison by casting the element to a string and switching it to lower case. This takes care of the cases where a 'NULL'
        # comes in as 'null', 'Null' or any other upper/lower case combination
        if str(data_list[i]).lower() == 'null':
            data_list[i] = None
        elif str(data_list[i]).lower() == 'true':
            data_list[i] = True
        elif str(data_list[i]).lower() == 'false':
            data_list[i] = False
    # Cast the list back to a tuple when returning it
    return tuple(data_list)
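The translation logic can be sketched standalone. The helper below is hypothetical: it swaps the module's `validate_input_type` call for a plain `isinstance` check so the sketch has no external dependencies.

```python
def translate_row(data_tuple):
    # Stand-in for the module's validate_input_type check
    if not isinstance(data_tuple, tuple):
        raise TypeError("expected a tuple, got {0}".format(type(data_tuple).__name__))
    data_list = list(data_tuple)
    for i, value in enumerate(data_list):
        # Lower-casing the string form catches 'NULL', 'null', 'Null', etc.
        lowered = str(value).lower()
        if lowered == 'null':
            data_list[i] = None
        elif lowered == 'true':
            data_list[i] = True
        elif lowered == 'false':
            data_list[i] = False
    return tuple(data_list)

print(translate_row(('NULL', 'True', 'false', 42, 'sensor_1')))
# -> (None, True, False, 42, 'sensor_1')
```

Note that non-string values such as the `42` above pass through untouched, since their string form matches none of the three keywords.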
def run_sql_statement(cursor, sql_statement, data_tuple=()):
    """The way Python runs SQL statements is a bit convoluted, with plenty of moving parts and things that can go wrong. Since this project runs
    plenty of them, it is a good idea to abstract this operation as much as possible.
    @:param cursor (psycopg2.extensions.cursor) - A cursor object, obtained from an active database connection, that is used by Python to run SQL
    statements as well as to process the results.
    @:param sql_statement (str) - The SQL statement string to be executed, with its values not explicit but replaced by '%s' placeholders instead.
    This method takes care of those replacements.
    @:param data_tuple (tuple) - A tuple with as many elements as the placeholders in the SQL string. The command that effectively runs the SQL
    statement takes two arguments: the original SQL statement with '%s' placeholders instead of its values, and a data tuple where those values are
    indicated in the expected order. The command then sends both elements across to be executed database side in a way that protects their content
    and integrity (it wards against SQL injection attacks).
    @:raise utils.InputValidationException - If the input arguments fail their validations
    @:raise PostgresDatabaseException - For errors related to database operations
    @:raise Exception - For any other error that may occur
    """
    run_sql_log = ambi_logger.get_logger(__name__)
    utils.validate_input_type(sql_statement, str)
    utils.validate_input_type(data_tuple, tuple)
    utils.validate_input_type(cursor, psycopg2.extensions.cursor)
    # Count the number of '%s' placeholders in the SQL statement and check that it matches the number of elements in the tuple
    if len(data_tuple) != sql_statement.count('%s'):
        error_msg = "Mismatch between the number of data tuple elements ({0}) and the number of replaceable '%s' in the sql statement string ({1})!".format(
            str(len(data_tuple)), str(sql_statement.count('%s')))
        run_sql_log.error(error_msg)
        raise PostgresDatabaseException(message=error_msg)
    # Done with the validations
    try:
        cursor.execute(sql_statement, data_tuple)
    except psycopg2.Error as err:
        run_sql_log.error(err.pgerror)
        raise PostgresDatabaseException(message=err.pgerror, error_code=err.pgcode, diag=err.diag)
    return cursor
def convert_timestamp_tb_to_datetime(timestamp):
    """This method converts a timestamp from a ThingsBoard remote API request (which has one of the weirdest formats around) into a datetime object
    that can be interpreted by the DATETIME data type, i.e., YYYY-MM-DD hh:mm:ss, which is way more human readable than the POSIX timestamp used in
    the ThingsBoard Postgres database and which also corresponds to the native datetime.datetime format from Python.
    @:param timestamp (int) - This is one of the trickiest elements found so far. The ThingsBoard internal data is stored in a Postgres database,
    presumably the one behind the data format returned by the remote API. Whatever it may be, it returns a 13 digit integer as the timestamp. A quick
    analysis suggests a regular POSIX timestamp, i.e., the number of seconds from 1970-01-01 00:00:00 until whenever the data was inserted in the
    database. There are loads of straightforward ways to convert such a value into a human-readable datetime, yet none of them worked on this one.
    The reason is simple: counting all seconds from 1970 until today yields a 10 digit number, and it stays at 10 digits for a very long time. The
    only reasonable explanation (though somewhat weird in its own way) is that the value returned by the remote API carries 3 extra decimal digits,
    the milliseconds, with the decimal point omitted for whatever reason. It turns out that this is exactly what is going on, so some extra flexing
    is needed: the method expects the 13 digit integer that comes straight from the remote API call and does whatever it needs to return a
    meaningful datetime.
    @:return data_datetime (datetime.datetime) - A regular datetime object that can be sent directly to a MySQL database expecting a DATETIME field
    (YYYY-MM-DD hh:mm:ss)
    @:raise utils.InputValidationException - If there is something wrong with the validation of inputs
    """
    times2date_log = ambi_logger.get_logger(__name__)
    utils.validate_input_type(timestamp, int)
    # Given how weird the datetime values returned by the ThingsBoard API are, be extra careful with this one
    if len(str(timestamp)) != 13:
        error_msg = "Please provide the full value for the timestamp returned by the remote API (expecting a 13 digit int, got {0} digits).".format(str(len(str(timestamp))))
        times2date_log.error(error_msg)
        raise utils.InputValidationException(message=error_msg)
    # From here the timestamp could simply be divided by 1000 to get it to 10 integer digits (+3 decimal), but the milliseconds are of no real
    # concern here. So, might as well drop the last 3 digits of the timestamp and call it a day (forcing an int cast after dividing the timestamp
    # by 1000 effectively truncates it, thus achieving the desired outcome)
    timestamp = int(timestamp / 1000)
    # The rest is trivial
    return datetime.datetime.fromtimestamp(timestamp)
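A self-contained sketch of the millisecond-timestamp handling described above. `tb_timestamp_to_datetime` is a hypothetical stand-in; it converts in UTC to keep the result independent of the local timezone, whereas the method above uses the local timezone.

```python
import datetime

def tb_timestamp_to_datetime(timestamp):
    # ThingsBoard returns milliseconds since the epoch as a 13 digit int
    if len(str(timestamp)) != 13:
        raise ValueError("expected a 13 digit timestamp, got {0}".format(timestamp))
    # Truncate the 3 millisecond digits, then convert as a regular POSIX
    # timestamp (UTC here, for a deterministic result)
    return datetime.datetime.fromtimestamp(timestamp // 1000, tz=datetime.timezone.utc)

print(tb_timestamp_to_datetime(1582230020123))
# -> 2020-02-20 20:20:20+00:00
```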
def checkUpdates():
    """ GET method to retrieve any available updates to the system
    @:type user_types allowed for this service: SYS_ADMIN
    @:return {
        "message": "string",
        "updateAvailable": true
    }
    """
    check_updates_log = ambi_logger.get_logger(__name__)
    service_endpoint = "/api/admin/updates"
    service_dict = utils.build_service_calling_info(mac.get_auth_token(user_type='sys_admin'), service_endpoint)
    try:
        response = requests.get(url=service_dict["url"], headers=service_dict["headers"])
    except (requests.exceptions.ConnectionError, requests.exceptions.ConnectTimeout) as ce:
        error_msg = "Could not get a response from {0}...".format(str(service_dict['url']))
        check_updates_log.error(error_msg)
        raise ce
    return response
def retrieve_asset_name(asset_id):
    """ This method mirrors the last one in the sense that it receives an asset_id and returns the associated name, if any.
    :param asset_id: (str) A 32-byte hexadecimal string in the expected xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx format. This element is configured in
    the database as the primary key, which means that this method should never receive multiple records.
    :raise utils.InputValidationException: If the input argument fails the initial data type validation
    :raise mysql_utils.MySQLDatabaseException: If any errors occur when consulting the database.
    :return asset_name: (str) If a unique match was found, None otherwise
    """
    log = ambi_logger.get_logger(__name__)
    # Validate the input
    utils.validate_id(entity_id=asset_id)
    # Prepare the database access elements
    database_name = user_config.access_info['mysql_database']['database']
    table_name = proj_config.mysql_db_tables['tenant_assets']
    cnx = mysql_utils.connect_db(database_name=database_name)
    select_cursor = cnx.cursor(buffered=True)
    sql_select = """SELECT entityType, name FROM """ + str(table_name) + """ WHERE id = %s;"""
    select_cursor = mysql_utils.run_sql_statement(cursor=select_cursor, sql_statement=sql_select, data_tuple=(asset_id,))
    # Analyse the execution
    if select_cursor.rowcount == 0:
        log.warning("No records returned from {0}.{1} using asset_id = {2}".format(str(database_name), str(table_name), str(asset_id)))
        select_cursor.close()
        cnx.close()
        return None
    elif select_cursor.rowcount > 1:
        log.warning("{0}.{1} returned {2} records using asset_id = {3}. Cannot continue...".format(
            str(database_name), str(table_name), str(select_cursor.rowcount), str(asset_id)))
        select_cursor.close()
        cnx.close()
        return None
    else:
        # A single record came back. Process it then
        record = select_cursor.fetchone()
        if record[0] != 'ASSET':
            error_msg = "The record returned from {0}.{1} using asset_id = {2} has a wrong entityType. Got a {3}, expected an 'ASSET'".format(
                str(database_name), str(table_name), str(asset_id), str(record[0]))
            log.error(msg=error_msg)
            select_cursor.close()
            cnx.close()
            return None
        else:
            # All is good so far. Check if the name returned is indeed a str and return it if so
            asset_name = record[1]
            utils.validate_input_type(asset_name, str)
            select_cursor.close()
            cnx.close()
            return asset_name
def handleOneWayDeviceRPCRequests(deviceId, remote_method, param_dict=None):
    """POST method to place a one way RPC call to a device identified by the 'deviceId' parameter provided as argument. The RPC command needs to
    have a valid target on the device (client) side to have any effect. The command should be specified in the 'remote_method' argument and any
    arguments required for its execution should be provided in the 'param_dict' argument.
    This method is equivalent to a 'fire-and-forget' routine: no replies are expected nor returned from the client side (this service doesn't wait
    for a response message in the respective topic). This means that this method should be used to ACTUATE only. To get any information from the
    device side, use the two way version of this method, since all device-side communications are initiated from "this" side (the
    ThingsBoard/application side) and thus a response is expected following a request.
    @:param deviceId (str) - The typical 32-byte, dash separated hexadecimal string that uniquely identifies the device in the intended ThingsBoard installation
    @:param remote_method (str) - The name of the method defined client-side that is to be executed with this call
    @:param param_dict (dict) - If the remote method requires arguments to be executed, use this dictionary argument to provide them
    @:return http_status_code (int) - This method only returns the HTTP status code regarding the execution of the HTTP request and nothing more
    Usage example: suppose there is a device configured in the ThingsBoard installation that abstracts a lighting switch that can be turned ON or
    OFF. The device (Raspberry Pi) that controls the switch has a continuously running method (via a 'while True', for instance) that listens and
    reacts to messages received from the ThingsBoard server instance. One of these control methods is called "setLightSwitch" and expects a boolean
    "value" passed in the message structure (False = OFF, True = ON). The switch is currently OFF. To turn it ON using this method, use the
    following calling structure:
    handleOneWayDeviceRPCRequests(deviceId="3806ac00-5411-11ea-aa0c-135d877fb643", remote_method="setLightSwitch", param_dict={"value": True})
    The message that is going to be forwarded to the device as a RPC request is the following JSON structure:
    {
        "method": "setLightSwitch",
        "params": {
            "value": true
        }
    }
    The "method"/"params" JSON is somewhat of a standard format for RPC interactions, with "params" being a complex (multi-level) dictionary to
    allow for complex, multi-argument remote method executions
    """
    one_way_log = ambi_logger.get_logger(__name__)
    # Validate inputs
    utils.validate_input_type(deviceId, str)
    utils.validate_id(entity_id=deviceId)
    utils.validate_input_type(remote_method, str)
    if param_dict:
        utils.validate_input_type(param_dict, dict)
    # The rest is pretty much more of the same. This one gets the deviceId built into the calling URL
    service_endpoint = '/api/plugins/rpc/oneway/' + deviceId
    # Create the data payload in the dictionary format
    data = {
        "method": str(remote_method),
        "params": param_dict
    }
    service_dict = utils.build_service_calling_info(mac.get_auth_token(user_type="tenant_admin"), service_endpoint=service_endpoint)
    # Done. Set things in motion then
    try:
        response = requests.post(url=service_dict["url"], headers=service_dict["headers"], data=json.dumps(data))
    except (requests.exceptions.ConnectionError, requests.exceptions.ConnectTimeout) as ce:
        error_msg = "Could not get a response from {0}...".format(str(service_dict['url']))
        one_way_log.error(error_msg)
        raise ce
    return response.status_code
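The payload assembly can be illustrated on its own. `build_rpc_payload` is a hypothetical helper showing how the dictionary above serializes into the "method"/"params" JSON structure that gets posted to the endpoint.

```python
import json

def build_rpc_payload(remote_method, param_dict=None):
    # The "method"/"params" pair is the JSON structure the RPC endpoint expects;
    # json.dumps turns Python's True into JSON's true along the way
    return json.dumps({"method": str(remote_method), "params": param_dict})

payload = build_rpc_payload("setLightSwitch", {"value": True})
print(payload)
# -> {"method": "setLightSwitch", "params": {"value": true}}
```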
def get_device_credentials(device_name):
    """ This method simplifies a recurring task: determining the entityType, entityId and timeseriesKeys associated to the device identified by
    'device_name'. Most remote services provided by the ThingsBoard remote API are fond of using this entityType/entityId pair to uniquely identify
    every entity configured in the platform (tenants, customers, devices, etc.), but us humans are more keen to rely on names to identify the same
    entities. This method integrates both approaches for the devices context: it searches the current database entries for the name provided and,
    if a unique result is obtained, returns its associated entityType, entityId and timeseriesKeys.
    @:param device_name (str) - The name of the device, as it should be defined via the 'name' field in the respective ThingsBoard installation
    @:raise utils.InputValidationException - If the input fails initial validation
    @:raise mysql_utils.MySQLDatabaseException - If the database access runs into problems
    @:return None if no single device could be found or if there are no credentials currently configured in the database for this device.
    Otherwise, the credentials are returned in the following format:
    device_credentials = (entityType (str), entityId (str), timeseriesKeys (list of str))
    """
    log = ambi_logger.get_logger(__name__)
    utils.validate_input_type(device_name, str)
    # Create the typical database access objects
    database_name = user_config.access_info['mysql_database']['database']
    table_name = proj_config.mysql_db_tables['devices']
    cnx = mysql_utils.connect_db(database_name=database_name)
    select_cursor = cnx.cursor(buffered=True)
    # Now for the tricky part: run a SQL SELECT using the device name as the only filter while ensuring that one and only one record is returned.
    # For this, use the SQL 'LIKE' operator with all four wildcard options in the search field:
    # 'device_name', '%device_name', 'device_name%' and '%device_name%'
    search_vector = [device_name, '%' + device_name, device_name + '%', '%' + device_name + '%']
    sql_select = """SELECT entityType, id, timeseriesKeys FROM """ + table_name + """ WHERE name LIKE %s;"""
    # Run a SQL SELECT with each element of the search vector, stopping the loop as soon as a single record gets returned
    result = None
    for i in range(len(search_vector)):
        select_cursor = mysql_utils.run_sql_statement(cursor=select_cursor, sql_statement=sql_select, data_tuple=(search_vector[i],))
        if select_cursor.rowcount != 1:
            log.warning("Searching for name = {0}. Got {1} results back. Expanding search with new wildcards...".format(str(search_vector[i]), str(select_cursor.rowcount)))
            # Ignore the rest of this loop in this case then
            continue
        else:
            log.info("Got a unique record while searching for name = {0}. Moving on".format(str(search_vector[i])))
            # Set the result parameter
            result = select_cursor.fetchone()
            # Get out of this loop in this case
            break
    # Check if a valid result was found. Continue if so, return None otherwise
    if result is None:
        log.warning("Could not retrieve a unique record for device_name = {0} in {1}.{2}. Nothing more to do...".format(str(device_name), str(database_name), str(table_name)))
        return None
    else:
        # The list of timeseries keys is kept in the database as a single string, with all elements separated by commas:
        # timeseriesKeys = "timeseriesKey1,timeseriesKey2,...,timeseriesKeyN"
        timeseriesKeys_list = result[2].split(',')
        # All done. Return the final structure then
        return result[0], result[1], timeseriesKeys_list
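The two search-side transformations, the LIKE wildcard vector and the comma separated timeseriesKeys string, can be sketched as standalone helpers (both names are illustrative, not part of this module).

```python
def build_search_vector(device_name):
    # The four LIKE patterns tried in order: exact, suffix, prefix, substring
    return [device_name, '%' + device_name, device_name + '%', '%' + device_name + '%']

def split_timeseries_keys(keys_string):
    # The database stores the keys as one comma separated string
    return keys_string.split(',')

print(build_search_vector("thermometer"))
# -> ['thermometer', '%thermometer', 'thermometer%', '%thermometer%']
print(split_timeseries_keys("temperature,humidity,battery"))
# -> ['temperature', 'humidity', 'battery']
```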
def handleTwoWayDeviceRPCRequest(deviceId, remote_method, param_dict=None):
    """POST method to execute bidirectional RPC calls to a specific method on a specific device, both identified through the argument list passed
    to this method. If the method to execute requires arguments itself, use the param_dict argument to provide a dictionary with these arguments,
    in a key-value scheme (the key is the name of the argument, with the actual value passed as... the value).
    This method implements a request-response dynamic, as opposed to the previous 'fire-and-forget' approach. The server submits an RPC request and
    then listens for a response with the same requestId in the response topic. The method blocks for a while until a valid response can be
    returned. The usage of this method is identical to the previous one.
    @:param deviceId (str) - The typical 32-byte, dash separated hexadecimal string that uniquely identifies the device in the intended ThingsBoard installation.
    @:param remote_method (str) - The name of the method defined client-side that is to be executed with this call.
    @:param param_dict (dict) - If the remote method requires arguments to be executed, use this dictionary argument to provide them.
    @:raise utils.InputValidationException - If any of the inputs fails initial validation
    @:raise utils.ServiceEndpointException - If the request was not properly executed
    @:return response (requests.models.Response) - This method returns the object that comes back from the HTTP request using the 'requests'
    package. This object has loads of interesting information regarding the original request that created it. For this context, the most relevant
    fields are response.status_code (int), which has the HTTP response code of the last execution, and response.text, which contains the response
    of the method that was called, if any. If the remote method doesn't return anything, this field comes back empty.
    """
    two_way_log = ambi_logger.get_logger(__name__)
    # Validate inputs
    utils.validate_input_type(deviceId, str)
    utils.validate_id(entity_id=deviceId)
    utils.validate_input_type(remote_method, str)
    if param_dict:
        utils.validate_input_type(param_dict, dict)
    # Set the endpoint
    service_endpoint = '/api/plugins/rpc/twoway/' + deviceId
    # Create the data payload as a dictionary
    data = {
        "method": str(remote_method),
        "params": param_dict
    }
    service_dict = utils.build_service_calling_info(mac.get_auth_token(user_type="tenant_admin"), service_endpoint=service_endpoint)
    # Send the request to the server. The response, if obtained, contains the response data
    try:
        response = requests.post(url=service_dict['url'], headers=service_dict['headers'], data=json.dumps(data))
    except (requests.exceptions.ConnectionError, requests.exceptions.ConnectTimeout) as ce:
        error_msg = "Could not get a response from {0}...".format(str(service_dict['url']))
        two_way_log.error(error_msg)
        raise ce
    # Watch out for HTTP 408. In this project, when that code is returned (server timeout), it normally means that communication with the remote
    # device was not established properly, for whatever reason
    if response.status_code == 408:
        error_msg = "Received a HTTP 408 - Server timed out. Could not get a response from device with id = {0}...".format(str(deviceId))
        two_way_log.error(error_msg)
        raise utils.ServiceEndpointException(message=error_msg)
    elif response.status_code == 200 and response.text == "":
        warn_msg = "Received a HTTP 200 - OK - but no response was returned..."
        two_way_log.warning(warn_msg)
        # Send back the result as a string, because that's what this method returns
        return "200"
    else:
        return response.text
def validate_database_table_name(table_name):
    """This simple method receives the name of a table and validates it by executing a SQL statement in the default database to retrieve all of its
    tables, checking if the table name in the input matches any of the returned values.
    @:param table_name (str) - The name of the database table whose existence is to be verified
    @:raise utils.InputValidationException - If the inputs fail initial validation
    @:raise PostgresDatabaseException - If any error occurs while executing database bound operations or if the table name was not found among the
    list of database tables retrieved
    @:return True (bool) - If table_name is among the database tables list"""
    validate_db_table_log = ambi_logger.get_logger(__name__)
    # Validate the input
    utils.validate_input_type(table_name, str)
    # Get the default database name
    database_name = user_config.postgres_db_access['database']
    # Create the database interface elements
    cnx = connect_db(database_name=database_name)
    select_cursor = cnx.cursor(buffered=True)
    # Prepare the SQL statement
    sql_select = """SELECT tables.table_name FROM information_schema.tables;"""
    # And execute it
    select_cursor = run_sql_statement(select_cursor, sql_select, ())
    # Check the data integrity first
    if not select_cursor.rowcount:
        error_msg = "The SQL statement '{0}' didn't return any results! Exiting...".format(str(select_cursor.query))
        validate_db_table_log.error(error_msg)
        select_cursor.close()
        cnx.close()
        raise PostgresDatabaseException(message=error_msg)
    # If results were returned
    else:
        # Grab the first one
        result = select_cursor.fetchone()
        # And run the next loop until all results are checked (result is set to None once the data retrieved from the database is exhausted)
        while result:
            # If a match is found
            if result[0] == table_name:
                # Return the response immediately
                return True
            # Otherwise
            else:
                # Grab the next one and run another iteration
                result = select_cursor.fetchone()
        # Getting here means that none of the results matched the table_name provided. Nothing more to do than to report that the table name is not valid
        raise PostgresDatabaseException(message="The table provided '{0}' is not among the current database tables!".format(str(table_name)))
def _get_new_session_token(connection_dict):
    """ Let's start with the basics. This method receives a dictionary object such as the one set in the config.py file, extracts the necessary
    connection parameters and, if all is OK, sends an HTTP request for a valid session token
    @:param connection_dict - a dictionary containing the host, port, username and password to connect successfully to a thingsboard instance
    @:return if successful, returns a string with an authentication token
    @:raise AuthenticationException if not successful (needs to be properly caught somewhere above)
    """
    new_session_log = ambi_logger.get_logger(__name__)
    # Validate the provided connection dictionary before anything else
    validate_connection_dict(connection_dict)
    # Build the elements of the POST command to request the authentication token. All of the remaining data, headers and such, are defined by
    # default from the standard curl command to retrieve the authentication tokens
    con_url = str(connection_dict["host"]) + ":" + str(connection_dict["port"]) + authentication_API_paths["token"]
    con_headers = {
        "Content-Type": "application/json",
        "Accept": "application/json"
    }
    # NOTE: The syntax for the authentication POST request is a JSON string with the username and password fields
    con_data = '{"username": "' + str(connection_dict["username"]) + '", "password": "' + str(connection_dict["password"]) + '"}'
    # And here's the main call to the remote server, this time using the 'requests' package instead of curl or another utility. The structure of
    # the command is slightly different but in the end it yields the same
    new_session_log.info("Requesting {0}...".format(str(con_url)))
    try:
        response = requests.post(url=con_url, data=con_data, headers=con_headers)
    except (requests.exceptions.ConnectionError, requests.exceptions.ConnectTimeout):
        error_msg = "Unable to establish a connection with {0}. Exiting...".format(str(str(user_config.thingsboard_host) + ":" + str(user_config.thingsboard_port)))
        new_session_log.error(error_msg)
        raise AuthenticationException(error_msg)
    # If a non OK response is returned, treat it as something abnormal, i.e., throw an AuthenticationException with its data populated with the
    # returned response data
    if response.status_code != 200:
        new_session_log.error(response.text)
        raise AuthenticationException(message=response.text, error_code=response.status_code)
    # The required token is returned in text (string) form initially, but it's actually a dictionary cast into a string. So, for simplicity's
    # sake, restore the returned string to its dictionary form and return it
    return ast.literal_eval(response.text)
def get_table_columns(database_name, table_name):
    """This method does a simple SELECT query to the database for just the column names of a given table. This is particularly useful for building
    INSERT and UPDATE statements, which require a specification of these elements.
    @:param database_name (str) - The name of the database to connect to
    @:param table_name (str) - The name of the table from the database to connect to
    @:raise utils.InputValidationException - If any of the inputs is not valid
    @:raise PostgresDatabaseException - For database related exceptions
    @:return column_list (list of str) - A list with all the names of the columns, in order, extracted from database_name.table_name
    """
    get_table_log = ambi_logger.get_logger(__name__)
    try:
        utils.validate_input_type(database_name, str)
        utils.validate_input_type(table_name, str)
        cnx = connect_db(database_name=database_name)
        select_cursor = cnx.cursor()
        sql_select = """SELECT column_name FROM information_schema.columns WHERE table_name = %s;"""
        data_tuple = (table_name,)
        select_cursor = run_sql_statement(cursor=select_cursor, sql_statement=sql_select, data_tuple=data_tuple)
        if not select_cursor.rowcount:
            error_msg = "Did not retrieve any column information from {0}.{1}. Cannot continue!".format(str(database_name), str(table_name))
            get_table_log.error(error_msg)
            raise PostgresDatabaseException(message=error_msg)
        # The last statement, if well executed, returns a list of one element tuples (because the SELECT statement only specified one field in
        # this case). To turn that into a proper list, go through all elements of the list and get them out of their tuples, basically...
        # Grab the full list first
        result_list = select_cursor.fetchall()
        return_list = []
        # And do the deed then
        for result in result_list:
            return_list.append(result[0])
        # Done. Send it back then
        return return_list
    except psycopg2.Error as err:
        get_table_log.error("Got a code {0} error with message: {1} when connecting to {2}.{3}".format(
            str(err.pgcode), str(err.pgerror), str(database_name), str(table_name)))
        raise PostgresDatabaseException(message=err.pgerror, error_code=err.pgcode, diag=err.diag)
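The tuple-flattening step can be sketched with sqlite3, whose `PRAGMA table_info` plays the role of `information_schema.columns` here (`get_columns` and the sample table are illustrative stand-ins).

```python
import sqlite3

def get_columns(cursor, table_name):
    # PRAGMA table_info returns one row per column; fetchall() yields tuples
    # and only field 1 (the column name) of each is wanted, so flatten them.
    # NOTE: table_name is interpolated directly here, fine for a sketch with a
    # trusted name, not for untrusted input
    cursor.execute("PRAGMA table_info({0});".format(table_name))
    return [row[1] for row in cursor.fetchall()]

cnx = sqlite3.connect(":memory:")
cur = cnx.cursor()
cur.execute("CREATE TABLE devices (id TEXT, name TEXT, timeseriesKeys TEXT);")
print(get_columns(cur, "devices"))
# -> ['id', 'name', 'timeseriesKeys']
```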
def print_dictionary(dictionary, tabs=1):
    """ A method to print out the contents of a dictionary. Useful for debugging, since it uses plenty of newlines to make the dictionary contents
    more readable
    @:param dictionary - The dictionary object to print
    @:param tabs - default parameter used to control the tabulation level between recursive calls to this function. Setting it to a value other
    than 1, the default, simply shifts the printed result by that many tabs to the right
    @:return None"""
    print_log = ambi_logger.get_logger(__name__)
    # Quick check for consistency
    try:
        validate_input_type(dictionary, dict)
    except InputValidationException as ive:
        print_log.error(ive.message)
        raise ive
    # Start by getting all the keys in the dictionary in a handy list
    dict_keys = list(dictionary.keys())
    for t in range(tabs - 1):
        print("\t", end='')
    print("{")
    # Guard against an empty dictionary, otherwise the last-key print below would fail
    if not dict_keys:
        print("}")
        return
    for i in range(len(dict_keys) - 1):
        # Deal with dictionaries inside of dictionaries in a recursive fashion
        if isinstance(dictionary[dict_keys[i]], dict):
            for t in range(tabs):
                print("\t", end='')
            print(str(dict_keys[i]) + ": ", end='')
            print_dictionary(dictionary[dict_keys[i]], tabs=tabs + 1)
            continue
        for t in range(tabs):
            print("\t", end='')
        print(str(dict_keys[i]) + ": " + str(dictionary[dict_keys[i]]) + ",")
    # The last key-value pair is printed without the trailing comma
    for t in range(tabs):
        print("\t", end='')
    print(str(dict_keys[-1]) + ": " + str(dictionary[dict_keys[-1]]))
    for t in range(tabs):
        print("\t", end='')
    print("}")
def get_device_types():
    """ Simplest method around. Useful for basic tests. This method simply executes and returns the results of the API call to the corresponding
    ThingsBoard API remote service.
    :return response: (dict) A dictionary with all the supported device types
    """
    log = ambi_logger.get_logger(__name__)
    response = tb_device_controller.getDeviceTypes()
    if response.status_code != 200:
        error_msg = "Received a HTTP {0} with the message {1}".format(str(response.status_code), str(eval(response.text)['message']))
        log.error(msg=error_msg)
        raise utils.ServiceEndpointException(message=error_msg)
    return eval(utils.translate_postgres_to_python(response.text))
def validate_sql_input_lists(column_list, table_name, trigger_column=False):
    """Since a series of validation steps is repeated across several SQL statement building methods, the whole thing might as well be abstracted in
    a method to save precious hours of typing the same thing over and over again.
    @:param column_list (list of str) - A list with the names of the database columns whose information is to be added to
    @:param table_name (str) - The name of the table where the SQL statement is going to be executed
    @:param trigger_column (str) - An optional parameter, given that only the UPDATE and DELETE statements use it (it goes in the WHERE
    trigger_column condition part of the statement)
    @:return True (bool) - If the data passes all validations
    @:raise utils.InputValidationException - If the input arguments are invalid
    @:raise Exception - For any other error types"""
    validate_sql_log = ambi_logger.get_logger(__name__)
    try:
        utils.validate_input_type(column_list, list)
        utils.validate_input_type(table_name, str)
        if trigger_column:
            utils.validate_input_type(trigger_column, str)
        for column in column_list:
            utils.validate_input_type(column, str)
    except utils.InputValidationException as ive:
        validate_sql_log.error(ive.message)
        raise ive
    if len(column_list) <= 0:
        error_msg = "The column list is empty!"
        validate_sql_log.error(error_msg)
        raise PostgresDatabaseException(message=error_msg)
    # If a trigger_column was provided, check if it is among the column list elements
    if trigger_column and trigger_column not in column_list:
        error_msg = "The trigger column provided ({0}) was not found in table {1}".format(str(trigger_column), str(table_name))
        validate_sql_log.error(error_msg)
        raise PostgresDatabaseException(message=error_msg)
    # Finally, use the get_table_columns method to return the list of columns for the table in question from the default database and check if all
    # elements in the column list provided are indeed in the list returned from the database
    database_name = user_config.postgres_db_access['database']
    full_column_list = get_table_columns(database_name=database_name, table_name=table_name)
    for column in column_list:
        if column not in full_column_list:
            error_msg = "The column '{0}' provided in the argument list is not among the columns for {1}.{2}. Cannot continue!".format(
                str(column), str(database_name), str(table_name))
            validate_sql_log.error(error_msg)
            raise PostgresDatabaseException(message=error_msg)
    # All is good with the data. Send back an OK
    return True
def getSecuritySettings():
    """Simple GET method to retrieve the current administration security settings. These can also be consulted in the ThingsBoard Admin dashboard under 'System Settings' -> 'Security Settings'
    @:type user_types allowed for this service: SYS_ADMIN (NOTE: This service is NOT AVAILABLE in the ThingsBoard remote installation at 62.48.174.118)
    @:param auth_token - A valid admin authorization token
    @:return {
        "maxFailedLoginAttempts": 0,
        "passwordPolicy": {
            "minimumDigits": 0,
            "minimumLength": 0,
            "minimumLowercaseLetters": 0,
            "minimumSpecialCharacters": 0,
            "minimumUppercaseLetters": 0,
            "passwordExpirationPeriodDays": 0,
            "passwordReuseFrequencyDays": 0
        },
        "userLockoutNotificationEmail": "string"
    }"""
    security_set_log = ambi_logger.get_logger(__name__)
    service_endpoint = "/api/admin/securitySettings"
    service_dict = utils.build_service_calling_info(mac.get_auth_token(user_type='sys_admin'), service_endpoint)
    try:
        response = requests.get(url=service_dict["url"], headers=service_dict["headers"])
    except (requests.exceptions.ConnectionError, requests.exceptions.ConnectTimeout) as ce:
        error_msg = "Could not get a response from {0}...".format(str(service_dict['url']))
        security_set_log.error(error_msg)
        raise ce
    # There's a possibility that a non-admin authorization token was used at this point. If that's the case, the request returns the appropriate response
    if response.status_code == 403 and ast.literal_eval(response.text)["errorCode"] == 20:
        # Throw the relevant exception if that is the case
        error_msg = "The authorization token provided does not have admin privileges!"
        security_set_log.error(error_msg)
        raise utils.AuthenticationException(message=error_msg, error_code=20)
    return response
def extract_all_key_value_pairs_from_dictionary(input_dictionary, current_value_dict=None):
    """This method is but the counterpart of the extract_all_keys_from_dictionary one. Same principle, same reason and almost the same logic: I need an expanded view of all the values in a given dictionary which, in the considered case, can have multiple levels, i.e., values that are themselves dictionaries. The most efficient way to get all values of such a dictionary into a linear structure is by employing recursion to explore all dictionary levels. In this case the return element is a flat dictionary of (key, value) pairs, because this method is going to be used to populate database tables where the keys of the input dictionary were used to name the database columns verbatim
    @:param input_dictionary (dict) - The dictionary whose key-value pairs I want to extract
    @:param current_value_dict (dict) - The dictionary of pairs gathered so far. Since I'm calling this method recursively, I need to provide the state of the process to the next iteration of the method; this argument is used for just that
    @:return current_value_dict (dict) - The dictionary that contains the partial results for when this method is executed recursively
    @:raise utils.InputValidationException - If an input argument fails its data type validation"""
    if current_value_dict is None:
        current_value_dict = {}
    extract_val_logger = ambi_logger.get_logger(__name__)
    # Validate the input dictionary and the current_value_dict
    try:
        validate_input_type(input_dictionary, dict)
        validate_input_type(current_value_dict, dict)
    except InputValidationException as ive:
        extract_val_logger.error(ive.message)
        raise ive
    # I need to iterate through all keys of the current level
    for key in list(input_dictionary.keys()):
        # If a sub dictionary is detected
        if isinstance(input_dictionary[key], dict):
            # Call this function again with the sub dictionary instead
            extract_all_key_value_pairs_from_dictionary(input_dictionary=input_dictionary[key], current_value_dict=current_value_dict)
        # Otherwise, just keep adding pairs to current_value_dict
        else:
            # NOTE: use '==' here; 'is' only checks object identity, not string equality
            if input_dictionary[key] == 'DEVICE_PROFILE':
                return current_value_dict
            current_value_dict[key] = input_dictionary[key]
    # Once the for loop is done, so am I. Send the dictionary back
    return current_value_dict
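A minimal standalone sketch of the recursive flattening idea used above (the names `flatten_pairs` and `acc` are illustrative, not part of this module; note that duplicate keys across levels overwrite each other, which matches the column-naming assumption):

```python
def flatten_pairs(input_dictionary, acc=None):
    """Collect all (key, value) pairs from a nested dict into one flat dict."""
    if acc is None:
        acc = {}
    for key, value in input_dictionary.items():
        if isinstance(value, dict):
            # Recurse into sub dictionaries, accumulating into the same dict
            flatten_pairs(value, acc)
        else:
            acc[key] = value
    return acc

nested = {"name": "Thermometer A", "tenantId": {"entityType": "TENANT", "id": "abc-123"}}
print(flatten_pairs(nested))  # {'name': 'Thermometer A', 'entityType': 'TENANT', 'id': 'abc-123'}
```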
def getAssetTypes():
    """This method employs the same logic as all the ThingsBoard interface methods so far: it creates the necessary endpoint, gathers any arguments necessary for the service call, places a GET request to the remote API and returns the response, in a dictionary data structure, as usual
    @:type user_types allowed for this service: TENANT_ADMIN, CUSTOMER_USER
    @:raise utils.ServiceEndpointException - If errors occur when accessing the remote API
    @:return result (dict) - A dictionary in the following format (dictated by the remote API):
    result = [
        {
            "tenantId": {
                "entityType": <str>,
                "id": <str>
            },
            "entityType": <str>,
            "type": <str>
        }
    ]
    This dictionary contains a list of the information associated to each asset type as a whole (it doesn't reveal how many assets exist in the database, just the types that were defined so far)"""
    # The usual log
    asset_types_log = ambi_logger.get_logger(__name__)
    # Base endpoint string
    service_endpoint = "/api/asset/types"
    # Build the full service dictionary for executing the remote call
    service_dict = utils.build_service_calling_info(mac.get_auth_token(user_type='tenant_admin'), service_endpoint)
    try:
        response = requests.get(url=service_dict['url'], headers=service_dict['headers'])
    except (requests.exceptions.ConnectionError, requests.exceptions.ConnectTimeout) as ce:
        error_msg = "Could not get a response from {0}...".format(str(service_dict['url']))
        asset_types_log.error(error_msg)
        raise ce
    # Check the status code of the HTTP response before moving forward
    if response.status_code != 200:
        error_msg = "Request unsuccessful: Received an HTTP {0} with message {1}.".format(str(eval(response.text)['status']), str(eval(response.text)['message']))
        asset_types_log.error(error_msg)
        raise utils.ServiceEndpointException(message=error_msg)
    else:
        # Return the response
        return response
def getDeviceTypes():
    """Simple GET method to retrieve the list of all device types stored in the ThingsBoard platform
    @:type user_types allowed for this service: TENANT_ADMIN, CUSTOMER_USER
    @:return standard request response"""
    device_types_log = ambi_logger.get_logger(__name__)
    # The service endpoint to call
    service_endpoint = "/api/device/types"
    # Get the standard request elements
    service_dict = utils.build_service_calling_info(mac.get_auth_token(user_type='tenant_admin'), service_endpoint)
    # Execute the service call
    try:
        response = requests.get(url=service_dict["url"], headers=service_dict["headers"])
    except (requests.exceptions.ConnectionError, requests.exceptions.ConnectTimeout) as ce:
        error_msg = "Could not get a response from {0}...".format(str(service_dict['url']))
        device_types_log.error(error_msg)
        raise ce
    return response
def compare_sets(set1, set2):
    """I needed to create this method to deal with the myriad of problems that I've encountered when comparing datasets from different sources (different databases in this case) while trying to assert their equality. Python does offer some powerful tools in that matter (perhaps too powerful given this case) but they turned out to be too 'strict' in some cases, resulting in unexpected False comparisons when the de facto elements were the same. The problem arises when one of the databases decides to return an int number cast as a string while the other sends the same number, from the same record and under the same column, but as an int type instead of a string. In Python 123 != '123' and that is enough to invalidate the whole operation. Given these impositions, I reckon that the best approach is an item-by-item comparison with both items cast to str (string) first. A str cast on a string doesn't do anything (as expected) but every other data type does have a str 'version'. In other words, every data type in Python can be cast to a string but not every string can be cast to something else (doing int('123Ricardo456') raises a ValueError Exception)
    @:param set1 (list) - One of the sets of results to be compared
    @:param set2 (list) - The other set to be compared with set1
    @:return True/False (bool) - depending on how the comparison goes. If all elements of set1 are functionally equal to those of set2 (regardless of whether they are cast into str or not), return True; otherwise, once a mismatch is detected, return False
    @:raise utils.InputValidationException - If the input sets are not lists"""
    compare_log = ambi_logger.get_logger(__name__)
    try:
        validate_input_type(set1, list)
        validate_input_type(set2, list)
    except InputValidationException as ive:
        compare_log.error(ive.message)
        raise ive
    # It only makes sense to compare the lists if they are non-empty and of the same size
    error_msg = None
    if len(set1) <= 0 or len(set2) <= 0:
        error_msg = "One of the sets provided is empty! Set1 size: {0} elements. Set2 size: {1} elements.".format(str(len(set1)), str(len(set2)))
    elif len(set1) != len(set2):
        error_msg = "The sets provided for comparison have a different number of elements! Set1: {0} items while Set2: {1} items.".format(str(len(set1)), str(len(set2)))
    if error_msg:
        compare_log.error(error_msg)
        raise InputValidationException(message=error_msg)
    for i in range(0, len(set1)):
        # Cast both elements to string before comparing
        if str(set1[i]) != str(set2[i]):
            # If a single mismatch is found, finish the method by returning False
            return False
    # If I made it to the end of the for loop, the sets are identical. Return True
    return True
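The string-cast comparison above can be sketched as a standalone helper (the name `compare_as_strings` is illustrative, not the module's own):

```python
def compare_as_strings(set1, set2):
    """Element-wise comparison with both sides cast to str first."""
    if len(set1) == 0 or len(set1) != len(set2):
        raise ValueError("Sets must be non-empty and of equal length")
    # str() makes 123 and '123' compare equal, as does None and 'None'
    return all(str(a) == str(b) for a, b in zip(set1, set2))

print(compare_as_strings([123, True, None], ["123", "True", "None"]))  # True
print(compare_as_strings([123], [124]))  # False
```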
def connect_db(database_name):
    """Basic method that returns a connection object upon a successful connection attempt to a database whose connection data is present in the configuration file, as a dictionary with the database name as its key
    NOTE: This method assumes a single server instance for the installation of all database data (a single triplet hostname, username and password). For databases that spawn over multiple servers or to support more than one user in this regard, please change this method accordingly
    @:param database_name (str) - The name of the database to connect to
    @:raise utils.InputValidationException - If the input arguments provided are invalid
    @:raise Exception - For any other occurring errors
    @:return cnx (psycopg2.extensions.connection) - An active connection to the database"""
    connect_log = ambi_logger.get_logger(__name__)
    try:
        utils.validate_input_type(database_name, str)
        connection_dict = user_config.access_info['postgres_database']
    except utils.InputValidationException as ive:
        connect_log.error(ive.message)
        raise ive
    except KeyError as ke:
        error_msg = "Missing the 'postgres_database' key from the user_config.access_info dictionary!"
        connect_log.error(error_msg)
        raise ke
    try:
        cnx = psycopg2.connect(user=connection_dict['username'],
                               password=connection_dict['password'],
                               database=connection_dict['database'],
                               host=connection_dict['host'],
                               port=connection_dict['port'])
    except psycopg2.Error as err:
        connect_log.error("Got a code {0} error with message: {1} when connecting to {2}:{3}.{4}".format(str(err.pgcode), str(err.pgerror), str(connection_dict['host']), str(connection_dict['port']), str(connection_dict['database'])))
        # Catch any errors under the generic psycopg2.Error and pass them upwards under the more specific PostgresDatabaseException
        raise PostgresDatabaseException(message=err.pgerror, error_code=err.pgcode, diag=err.diag)
    return cnx
def build_service_calling_info(auth_token, service_endpoint):
    """Basic method to automate the headers and url parts of GET and POST requests. This step is common to all services, therefore it makes sense to abstract it.
    NOTE: This method returns only url and headers information. The 'data' parameter is usually specific to each service and thus needs to be built in the service call
    @:param auth_token - a valid authorization token for the service
    @:param service_endpoint - Service endpoint
    @:return {
        "headers": {
            "Content-Type": string,
            "Accept": string,
            "X-Authorization": string
        },
        "url": string
    }"""
    # As usual, check the input data first
    build_log = ambi_logger.get_logger(__name__)
    try:
        validate_input_type(auth_token, str)
        validate_input_type(service_endpoint, str)
    except InputValidationException as ive:
        build_log.error(ive.message)
        raise ive
    # If it's all good, return a dictionary with the standard data filled in
    return {
        "headers": {
            "Content-Type": "application/json",
            "Accept": "application/json",
            "X-Authorization": "Bearer " + str(auth_token)
        },
        "url": str(user_config.access_info['host']) + ":" + str(user_config.access_info['port']) + service_endpoint
    }
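A self-contained sketch of the same header/url construction, with the host and port passed explicitly instead of read from `user_config` (the function name and example values are illustrative only):

```python
def build_headers_sketch(auth_token, host, port, service_endpoint):
    """Assemble the standard headers dict and full service URL for a request."""
    return {
        "headers": {
            "Content-Type": "application/json",
            "Accept": "application/json",
            # ThingsBoard expects the token in the X-Authorization header with a 'Bearer ' prefix
            "X-Authorization": "Bearer " + str(auth_token),
        },
        "url": str(host) + ":" + str(port) + service_endpoint,
    }

info = build_headers_sketch("abc123", "http://localhost", 8080, "/api/device/types")
print(info["url"])  # http://localhost:8080/api/device/types
```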
def translate_postgres_to_python(response_text):
    """This method deals with the idiosyncrasies between what is expected by Python and what is returned by the databases (both MySQL and Postgres), which can potentially cause really annoying bugs. So far, I've identified the following potential problems:
    Python uses None to denote empty values while the databases use NULL;
    Python's booleans are 'True' and 'False' while Postgres uses 'true' and 'false'.
    These differences, though subtle, can raise Exceptions. As such, this method receives the response from the API side in str form and takes advantage of str.replace(old, new) to correct these problems. Since I have to do this for every API interacting method, I might as well write a method for it.
    @:param response_text (str) - The text parameter from the response obtained, as is, from the HTTP request to the remote API
    @:return response (str) - The same response with the offending terms replaced (so that eval and other parsing functions can be used at will)
    @:raise InputValidationException - If the input fails validation"""
    trans_postgres_log = ambi_logger.get_logger(__name__)
    try:
        validate_input_type(response_text, str)
    except InputValidationException as ive:
        trans_postgres_log.error(ive.message)
        raise ive
    # The rest is easy
    return response_text.replace('null', 'None').replace('true', 'True').replace('false', 'False')
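A quick illustration of the translation above, plus its main caveat: plain substring replacement also rewrites occurrences of 'null'/'true'/'false' *inside* string values (e.g. a device named "nullable" would be mangled). When the payload is valid JSON, `json.loads` performs the same mapping safely; the example payload here is made up:

```python
import json

raw = '{"name": "sensor-1", "active": true, "label": null}'

# The straight replacement used by translate_postgres_to_python:
translated = raw.replace('null', 'None').replace('true', 'True').replace('false', 'False')
print(eval(translated))  # {'name': 'sensor-1', 'active': True, 'label': None}

# json.loads maps null -> None and true/false -> True/False without touching values:
print(json.loads(raw))  # {'name': 'sensor-1', 'active': True, 'label': None}
```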
def _refresh_authorization_tokens(admin=False):
    """This method simply does a call to _get_new_session_token to get a fresh set of authorization and refresh tokens from the server. This is just to abstract the following code a bit more since I've realised that I'm calling the aforementioned method all the time
    @:param admin - a boolean indicating if the tokens to be refreshed are for an admin profile (admin=True) or a regular profile (admin=False). The default option is always 'regular user' (admin=False)
    @:raise InputValidationException - if the admin argument is empty or not a boolean
    @:return a dictionary with a fresh set of authorization and refresh tokens
    {
        'token': string,
        'refreshToken': string
    }"""
    refresh_log = ambi_logger.get_logger(__name__)
    try:
        validate_input_type(admin, bool)
    except InputValidationException as ive:
        refresh_log.error(ive.message)
        raise ive
    if admin:
        return _get_new_session_token(user_config.access_info['sys_admin'])
    else:
        return _get_new_session_token(user_config.access_info['tenant_admin'])
def update_tenants_table():
    """This method is the database version, of sorts, of the tenant_controller functions, namely the getTenants() one. This method uses the latter function to get data about the current tenants, checks the existing tenant data in the corresponding database table and acts accordingly: new tenants are inserted as new records in the database, missing tenants are deleted and modified tenants get their records updated. This method does all this through sub methods that add, delete and update tenant records (so that, later on, one is not restricted to this method alone to alter the tenants table. Any of the other, more atomized methods can be used for more precise operations in the database)"""
    # The key that I need to use to retrieve the correct table name for where I need to insert the tenant data
    module_table_key = 'tenants'
    # Fetch the data from the remote API. Set a high value for the limit argument. If there are still results left to return, this method call prints a warning log about it. Change this value accordingly if that happens
    limit = 50
    # Get the response object from the API side method
    response = tb_tenant_controller.getTenants(limit=limit)
    # The eval command casts the result to the base dictionary returned
    response_dict = eval(utils.translate_postgres_to_python(response.text))
    # Before processing the results, check if all of them were returned in the last call and warn the user otherwise
    if response_dict['hasNext']:
        # Create the logger first. Since I only need it for this single instance, in this particular case the logger is going to be circumscribed to this little if clause
        update_tenants_log = ambi_logger.get_logger(__name__)
        update_tenants_log.warning("Not all results from the remote API were returned on the last call (limit = {0}). Raise the limit parameter to retrieve more".format(str(limit)))
    # Extract just the part that I'm concerned with
    tenant_list = response_dict['data']
    # Each element in the tenant list is a tenant. Process them one by one using the insert and update functions. Actually, the way I wrote these functions, you can call either of them since their internal logic decides,
    # based on what's already present in the database, what is the best course of action (INSERT or UPDATE)
    for tenant in tenant_list:
        # Two things need to be done before sending the data to the database: expand any sub-level in the current tenant dictionary
        tenant = utils.extract_all_key_value_pairs_from_dictionary(input_dictionary=tenant)
        # And replace any POSIX-type timestamps with the MySQL friendly DATETIME type
        try:
            tenant['createdTime'] = mysql_utils.convert_timestamp_tb_to_datetime(timestamp=tenant['createdTime'])
        except KeyError:
            # Ignore if this key doesn't exist in the tenant dictionary
            pass
        database_table_updater.add_table_data(tenant, proj_config.mysql_db_tables[module_table_key])
def validate_connection_dict(connection_dict):
    """This method receives a dictionary with connection credentials and checks if its structure is sound
    @:param connection_dict - a dictionary containing access credentials
    @:raise AuthenticationException - in case the dictionary provided does not have the expected structure
    @:return True - if the provided dictionary is well formed"""
    validate_log = ambi_logger.get_logger(__name__)
    try:
        validate_input_type(connection_dict, dict)
    except InputValidationException as ive:
        validate_log.error(ive.message)
        raise ive
    error_msg = None
    error_code = 0
    # Validate the various aspects expected from the connection dictionary
    if len(connection_dict) != 4:
        error_msg = "Wrong number of elements in the authentication dictionary: {0}".format(str(len(connection_dict)))
    # Try to get a host from the provided dictionary. If it returns None, throw the exception. The rest falls alike
    elif not connection_dict.get("host"):
        error_msg = "Please provide a valid host key in the authentication dictionary"
    elif not connection_dict.get("port"):
        error_msg = "Please provide a valid port key in the authentication dictionary"
    elif not connection_dict.get("username"):
        error_msg = "Please provide a valid username key in the authentication dictionary"
    elif not connection_dict.get("password"):
        error_msg = "Please provide a valid password key in the authentication dictionary"
    if error_msg:
        # If the error message was set, log its contents as ERROR. At this point it holds the reason that disqualified the dictionary in the first place
        validate_log.error(error_msg)
        raise AuthenticationException(error_msg, error_code)
    # All checks passed: the dictionary is well formed
    return True
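The validation cascade above can be sketched as a standalone function that returns the first error message found, or None when the dictionary is well formed (the name `check_connection_dict` and the sample credentials are illustrative only):

```python
def check_connection_dict(connection_dict):
    """Return None if the dict is well formed, otherwise the first error message."""
    # Expect exactly the four credential keys
    if len(connection_dict) != 4:
        return "Wrong number of elements: {0}".format(len(connection_dict))
    # Each key must be present and truthy (no empty strings, no None)
    for key in ("host", "port", "username", "password"):
        if not connection_dict.get(key):
            return "Please provide a valid {0} key".format(key)
    return None

good = {"host": "127.0.0.1", "port": 5432, "username": "ambi", "password": "secret"}
print(check_connection_dict(good))  # None
print(check_connection_dict({"host": "127.0.0.1"}))  # Wrong number of elements: 1
```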
def update_asset_devices_table():
    """Use this method to fill out the asset devices table that maps devices to the assets they are related to. The idea here is to use ASSETs to represent spaces and the ThingsBoard relation property to associate DEVICEs to those assets as a way to represent the devices currently installed in, and monitoring, that space
    @:raise mysql_utils.MySQLDatabaseException - For problems related with the database access
    @:raise utils.ServiceEndpointException - For issues related with the remote API call
    @:raise utils.AuthenticationException - For problems related with the authentication credentials used"""
    asset_devices_log = ambi_logger.get_logger(__name__)
    asset_devices_table_name = proj_config.mysql_db_tables['asset_devices']
    assets_table_name = proj_config.mysql_db_tables['tenant_assets']
    devices_table_name = proj_config.mysql_db_tables['devices']
    # Fire up the database access objects
    database_name = user_config.access_info['mysql_database']['database']
    cnx = mysql_utils.connect_db(database_name=database_name)
    outer_select_cursor = cnx.cursor(buffered=True)
    inner_select_cursor = cnx.cursor(buffered=True)
    mysql_utils.reset_table(table_name=asset_devices_table_name)
    # First get a list of all the assets supported so far. Refresh the asset database table first of all
    mysql_asset_controller.update_tenant_assets_table()
    # And the devices table too since I need data from there later on
    mysql_device_controller.update_devices_table()
    # And grab all asset ids, names and types (I need those later)
    sql_select = """SELECT id, name, type FROM """ + str(assets_table_name) + """;"""
    # Execute the statement with the outer select cursor
    outer_select_cursor = mysql_utils.run_sql_statement(cursor=outer_select_cursor, sql_statement=sql_select, data_tuple=())
    # Check if any results came back
    if outer_select_cursor.rowcount <= 0:
        error_msg = "Unable to get any results from {0}.{1} with '{2}'...".format(str(database_name), str(assets_table_name), str(outer_select_cursor.statement))
        asset_devices_log.error(error_msg)
        outer_select_cursor.close()
        inner_select_cursor.close()
        cnx.close()
        raise mysql_utils.MySQLDatabaseException(message=error_msg)
    # Got some results. Process them then
    else:
        # For each valid ASSET id found in this table, run a query in the ThingsBoard side of things for all DEVICEs that have a relation to that asset and send it to the database
        asset_info = outer_select_cursor.fetchone()
        # Set the commonly used parameters for the entity-relation call
        entityType = "ASSET"
        relationTypeGroup = "COMMON"
        direction = "FROM"
        # Run this while there are still asset ids to process
        while asset_info:
            # Query for related devices
            api_response = tb_entity_relation_controller.findByQuery(entityType=entityType, entityId=asset_info[0], relationTypeGroup=relationTypeGroup, direction=direction)
            # Get rid of all non-Python terms in the response dictionary and cast it as a list too
            relation_list = eval(utils.translate_postgres_to_python(api_response.text))
            # Now let's format this info accordingly and send it to the database
            for relation in relation_list:
                # Create a dictionary to store all the data to send to the database. For now it's easier to manipulate one of these and cast it to a tuple just before executing the statement
                data_dict = {
                    "fromEntityType": relation["from"]["entityType"],
                    "fromId": relation["from"]["id"],
                    "fromName": asset_info[1],
                    "fromType": asset_info[2],
                    "toEntityType": relation["to"]["entityType"],
                    "toId": relation["to"]["id"],
                    "toName": None,
                    "toType": None,
                    "relationType": relation["type"],
                    "relationGroup": relation["typeGroup"],
                }
                # As always, take care with the annoying 'description'/'additionalInfo' issue...
                if relation["additionalInfo"] is not None:
                    try:
                        # Try to get a 'description' from the 'additionalInfo' sub dictionary of the returned dictionary
                        data_dict["description"] = relation["additionalInfo"]["description"]
                    # If the field wasn't set, instead of crashing the code
                    except KeyError:
                        # Simply set this field to None and move on with it...
                        data_dict["description"] = None
                else:
                    data_dict["description"] = None
                # And now to get the data to use in the INSERT statement. For that I need to do a quick SELECT first since I need info from a different table too
                if data_dict['toEntityType'] == 'DEVICE':
                    sql_select = """SELECT name, type FROM """ + str(devices_table_name) + """ WHERE id = %s;"""
                elif data_dict['toEntityType'] == 'ASSET':
                    sql_select = """SELECT name, type FROM """ + str(assets_table_name) + """ WHERE id = %s;"""
                data_tuple = (relation['to']['id'],)
                # NOTE: I need to use the inner cursor because the outer cursor still has unprocessed results from the previous SELECT execution and using it would delete those
                inner_select_cursor = mysql_utils.run_sql_statement(cursor=inner_select_cursor, sql_statement=sql_select, data_tuple=data_tuple)
                # Check if any results were returned. Use them if so, otherwise leave the None defaults in place. In this case there's no point in raising Exceptions if I can't find the actual name or type of the related device
                if inner_select_cursor.rowcount > 0:
                    # I got the data I was looking for from the devices table
                    result = inner_select_cursor.fetchone()
                    data_dict["toName"] = result[0]
                    data_dict["toType"] = result[1]
                # Write the data in the database table
                mysql_utils.add_data_to_table(table_name=asset_devices_table_name, data_dict=data_dict)
            # Finished with the current asset. Fetch the next one and repeat the cycle if it's not None
            asset_info = outer_select_cursor.fetchone()
        # Done with everything, I believe. Close the database access objects and carry on
        outer_select_cursor.close()
        inner_select_cursor.close()
        cnx.close()
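The 'description'/'additionalInfo' handling above reduces to a small helper worth isolating (the name `extract_description` is illustrative; it treats a missing 'description' key the same as an explicit None, as the code above does):

```python
def extract_description(relation):
    """Pull an optional description out of a relation's additionalInfo sub dict."""
    info = relation.get("additionalInfo")
    if info is None:
        return None
    # dict.get returns None when the 'description' key was never set
    return info.get("description")

print(extract_description({"additionalInfo": {"description": "meeting room"}}))  # meeting room
print(extract_description({"additionalInfo": {}}))  # None
print(extract_description({"additionalInfo": None}))  # None
```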
def add_table_data(data_dict, table_name): """ Method that abstracts the insertion of data into the provided database table. The method maps the provided data dictionary to any available column in the table identified in table name. The method uses the table data as main reference, i.e., it only writes data whose key in the data dictionary has a direct correspondence to a table column in the database. If more the data dictionary has more keys/items than available columns, an log warning is issued about it but the method carries on writing in all available data. @:param data_dict (dict) - A dict structure, i.e., a key-value arrangement with the data to be added/updated into the database table. IMPORTANT: The table columns name were prepared such that there's a one-to-one equivalence between them and the expected keys in the data dictionary. @:param table_name (str) - The name of the database where the data dict has to be written into. @:raise utils.InputValidationException - If any of the inputs fails initial validation. @:raise mysql_utils.MySQLDatabaseException - If any issues occur with the database accesses. @:return result (bool) - If the database addition/update was performed successfully, this method returns True. Otherwise, the appropriate exception is raised with the details on why it was raised in the first place. 
""" log = ambi_logger.get_logger(__name__) # Validate inputs utils.validate_input_type(data_dict, dict) utils.validate_input_type(table_name, str) # Prepare the database access objects database_name = user_config.access_info['mysql_database']['database'] cnx = mysql_utils.connect_db(database_name=database_name) select_cursor = cnx.cursor(buffered=True) change_cursor = cnx.cursor(buffered=True) # First, check if the table exists sql_select = """SHOW tables FROM """ + str(database_name) + """;""" select_cursor = mysql_utils.run_sql_statement(cursor=select_cursor, sql_statement=sql_select, data_tuple=()) records = select_cursor.fetchall() # The result of the previous fetchall command is a list of one element tuples, hence why I'm putting the name that I want to verify its existence in such list as a one element tuple. The alternative was to format the previous list into a # single string list element, but it is way more simple this way if (table_name, ) not in records: error_msg = "The table name provided: {0} doesn't exist yet in database {1}. Cannot continue.".format( str(table_name), str(database_name)) log.error(error_msg) raise utils.InputValidationException(message=error_msg) # Okay, the table exists in the database. 
    # Get all its columns into a list
    column_list = mysql_utils.get_table_columns(database_name=database_name, table_name=table_name)
    # And create the standard INSERT statement from it
    sql_insert = mysql_utils.create_insert_sql_statement(column_list=column_list, table_name=table_name)

    # Build the respective data tuple by going through all column names and checking if there is a corresponding key in the data dictionary
    data_list = []
    for i in range(0, len(column_list)):
        try:
            # First, try to retrieve a value into the data list by doing a direct retrieval from the data dictionary using the column name as key
            data_list.append(data_dict[column_list[i]])
        # If the current column name doesn't have a matching key in the data dictionary, catch the expected Exception
        except KeyError:
            # And replace the missing value with a None since, by default, all table columns were created in a way that lets them hold such a value.
            # But warn the user first (NOTE: the warning has to name the current column, column_list[i], not column_list[0])
            log.warning("Didn't find any '{0}' keys in the data dictionary provided. Setting the {0} column in {1}.{2} to NULL"
                        .format(str(column_list[i]), str(database_name), str(table_name)))
            # And set the value then
            data_list.append(None)

    # Done. Proceed with the INSERT
    try:
        change_cursor = mysql_utils.run_sql_statement(cursor=change_cursor, sql_statement=sql_insert, data_tuple=tuple(data_list))

        # Check the outcome of the previous execution. If no rows were changed by the previous statement, raise a 'Duplicate entry' Exception to trigger an UPDATE instead.
        # NOTE: compare with '==' here; 'is' tests object identity, which is unreliable for integers
        if change_cursor.rowcount == 0:
            # No changes to the database table detected. Trigger an UPDATE then
            raise mysql_utils.MySQLDatabaseException(message=proj_config.double_record_msg)
        elif change_cursor.rowcount == 1:
            # In this case, all went well. Close the database access objects, commit the changes to the database, inform the user of this and move on
            log.info("Successfully added a new record to {0}.{1}".format(str(database_name), str(table_name)))
            cnx.commit()
            select_cursor.close()
            change_cursor.close()
            cnx.close()
            return True
    # Watch out for the typical "Duplicate entry" exception
    except mysql_utils.MySQLDatabaseException as mse:
        if proj_config.double_record_msg in mse.message:
            trigger_column_list = mysql_utils.get_trigger_columns(table_name=table_name)
            # Cool. Use this data to get the respective UPDATE statement
            sql_update = mysql_utils.create_update_sql_statement(column_list=column_list, table_name=table_name, trigger_column_list=trigger_column_list)
            # And complete the existing data list by appending to it the values corresponding to the elements in the trigger list
            for trigger_column_name in trigger_column_list:
                try:
                    data_list.append(data_dict[trigger_column_name])
                except KeyError:
                    error_msg = "The value for the trigger column '{0}' cannot be found among the data dictionary elements! Cannot continue!".format(str(trigger_column_name))
                    log.error(error_msg)
                    select_cursor.close()
                    change_cursor.close()
                    cnx.close()
                    raise mysql_utils.MySQLDatabaseException(message=error_msg)

            # Done. Run the UPDATE statement, still looking for duplicate records
            try:
                change_cursor = mysql_utils.run_sql_statement(cursor=change_cursor, sql_statement=sql_update, data_tuple=tuple(data_list))

                # Check the UPDATE execution status
                if change_cursor.rowcount == 0:
                    # If nothing happened, the record already exists in the database. Give a bit of a heads up and move on
                    log.warning("A record with data:\n{0}\n already exists in {1}.{2}. Nothing more to do..."
                                .format(str(data_list), str(database_name), str(table_name)))
                    select_cursor.close()
                    change_cursor.close()
                    cnx.close()
                    return False
                # Else, if something other than exactly one record was modified
                if change_cursor.rowcount != 1:
                    error_msg = "Could not execute\n{0}\nin {1}.{2}. Cannot continue..".format(str(change_cursor.statement), str(database_name), str(table_name))
                    log.error(error_msg)
                    select_cursor.close()
                    change_cursor.close()
                    cnx.close()
                    raise mysql_utils.MySQLDatabaseException(message=error_msg)
                else:
                    info_msg = "Updated record successfully in {0}.{1}".format(str(database_name), str(table_name))
                    log.info(info_msg)
                    cnx.commit()
                    select_cursor.close()
                    change_cursor.close()
                    cnx.close()
                    return True
            except mysql_utils.MySQLDatabaseException as mse:
                # If a duplicate result was still found with the last UPDATE execution
                if proj_config.double_record_msg in mse.message:
                    # Inform the user with a warning message
                    warn_msg = "The record with "
                    for i in range(0, len(trigger_column_list) - 1):
                        warn_msg += str(trigger_column_list[i]) + " = " + str(data_dict[trigger_column_list[i]]) + ", "
                    warn_msg += str(trigger_column_list[-1]) + " = " + str(data_dict[trigger_column_list[-1]]) + \
                        " already exists in {0}.{1}. Nothing to do then...".format(str(database_name), str(table_name))
                    log.warning(warn_msg)
                    # Close out all database access objects
                    select_cursor.close()
                    change_cursor.close()
                    cnx.close()
                    # Nothing else can be done here. Move out
                    return True
                else:
                    # Some other Exception was raised then
                    select_cursor.close()
                    change_cursor.close()
                    cnx.close()
                    # Forward the Exception then
                    raise mse
        else:
            select_cursor.close()
            change_cursor.close()
            cnx.close()
            # Something else must have happened then. Keep on raising the Exception
            raise mse
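The INSERT-then-UPDATE flow above leans on two project-specific helpers, `mysql_utils.create_insert_sql_statement` and `mysql_utils.create_update_sql_statement`, whose bodies are not shown here. As a minimal sketch of what such statement builders could look like (these are hypothetical stand-ins, not the project's actual implementations), assuming parameterized `%s` placeholders as used by `mysql.connector`:

```python
def create_insert_sql_statement(column_list, table_name):
    # One '%s' placeholder per column, so values are bound safely at execute() time
    placeholders = ", ".join(["%s"] * len(column_list))
    columns = ", ".join(column_list)
    return "INSERT INTO {0} ({1}) VALUES ({2});".format(table_name, columns, placeholders)


def create_update_sql_statement(column_list, table_name, trigger_column_list):
    # SET every column, keyed on the trigger (identifying) columns in the WHERE clause
    assignments = ", ".join(["{0} = %s".format(col) for col in column_list])
    conditions = " AND ".join(["{0} = %s".format(col) for col in trigger_column_list])
    return "UPDATE {0} SET {1} WHERE {2};".format(table_name, assignments, conditions)


# Example: the data tuple for the UPDATE is the column values followed by the trigger values,
# which matches the order in which the code above appends them to data_list
print(create_insert_sql_statement(["id", "name"], "devices"))
print(create_update_sql_statement(["id", "name"], "devices", ["id"]))
```

Note that the trigger values are appended to `data_list` after the regular column values, which is exactly the order the `WHERE` clause placeholders expect.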
def getLatestTimeseries(device_name, timeseries_keys_filter=None):
    """This method is analogous to the previous one, i.e., it also retrieves Timeseries data associated to the device identified by 'device_name', but in this particular case only one timestamp/value pair is returned for each of the device's
    measurements, namely the last one recorded by the ThingsBoard installation that oversees that device. This method is very useful to:
    1. Determine if a device is working by retrieving the last recorded data.
    2. Determine the timeseries keys associated to the device, as well as the last timestamp associated to them.
    3. Determine the most recent end_date possible for that device in a direct way - once this parameter is known, a more complete and insightful getTimeseries call can then be placed.
    @:param device_name (str) - The name of the device whose latest associated timeseries should be retrieved by this method.
    @:param timeseries_keys_filter (list of str) - A list with the names of the timeseries keys to be returned by the remote API. If a valid list is provided, only data for the keys specified in this list is going to be returned. This method
    validates this list against any associated timeseriesKeys for the device: mismatched elements from this list are ignored.
    @:raise utils.InputValidationException - If any of the inputs fails initial validation
    @:raise utils.ServiceEndpointException - If errors occur when invoking any remote API services
    @:raise mysql_utils.MySQLDatabaseException - If errors occur when accessing the database
    @:return None if the API returns an empty set, otherwise a dictionary with the following format:
        device_data = {
            "timeseriesKey_1": [{"ts": int, "value": str}],
            "timeseriesKey_2": [{"ts": int, "value": str}],
            ...
            "timeseriesKey_N": [{"ts": int, "value": str}]
        }
    """
    log = ambi_logger.get_logger(__name__)
    utils.validate_input_type(device_name, str)
    if timeseries_keys_filter:
        utils.validate_input_type(timeseries_keys_filter, list)
        for ts_key in timeseries_keys_filter:
            utils.validate_input_type(ts_key, str)

    # Grab the device credentials at this point. If the method returns anything (not None), then assume that it's the following tuple: (entityType, entityId, timeseriesKeys_list)
    device_cred = mysql_device_controller.get_device_credentials(device_name=device_name)
    if device_cred is None:
        error_msg = "Could not get valid credentials for device '{0}'. Cannot continue...".format(str(device_name))
        log.error(error_msg)
        raise mysql_utils.MySQLDatabaseException(message=error_msg)

    # Validate the timeseries keys, if any were provided
    keys = None
    if timeseries_keys_filter:
        valid_keys = []
        for ts_filter_key in timeseries_keys_filter:
            # Keep only the valid keys, i.e., the ones with a correspondence in the list returned from the database
            if ts_filter_key in device_cred[2]:
                valid_keys.append(ts_filter_key)
        # Check if at least one of the proposed keys made it to the valid list. If not, default to the list returned from the database (if that one is also not empty)
        if not len(valid_keys):
            log.warning("Could not validate any of the filter keys ({0}) provided as argument!".format(str(timeseries_keys_filter)))
            # Check if the timeseriesKeys element returned from the device credentials request was a single-element list with an empty string inside it
            if len(device_cred[2]) == 1 and device_cred[2][0] == "":
                log.warning("The database didn't return any valid set of timeseriesKeys. Omitting this argument in the API call")
            # Don't do anything else. The 'keys' parameter is already None. Keep it that way then
        else:
            keys = ",".join(valid_keys)

    # I have all I need to execute the remote API call
    service_endpoint = "/api/plugins/telemetry/{0}/{1}/values/timeseries".format(str(device_cred[0]), str(device_cred[1]))
    # Add the keys filter, if it was provided
    if keys is not None:
        service_endpoint += "?keys={0}".format(str(keys))

    # Service endpoint is done. Grab the service calling dictionary
    service_dict = utils.build_service_calling_info(auth_token=mac.get_auth_token(user_type='tenant_admin'), service_endpoint=service_endpoint)

    # Execute the remote call finally
    try:
        response = requests.get(url=service_dict['url'], headers=service_dict['headers'])
    except (requests.exceptions.ConnectionError, requests.exceptions.ConnectTimeout):
        error_msg = "Unable to establish a connection with {0}...".format(str(service_dict['url']))
        log.error(error_msg)
        raise utils.ServiceEndpointException(message=error_msg)

    # Check the HTTP status code in the response
    if response.status_code != 200:
        error_msg = "Request unsuccessful: Received HTTP {0} with message {1}!".format(str(response.status_code), str(eval(response.text)['message']))
        log.error(error_msg)
        raise utils.ServiceEndpointException(message=error_msg)
    else:
        # Send back the response already in dictionary form
        return eval(response.text)
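The key-filtering and endpoint-assembly steps in `getLatestTimeseries` are pure string/list manipulation, so they can be factored out and exercised in isolation. A minimal sketch (the function names here are hypothetical, not part of the project; the endpoint path is the one used in the method above):

```python
def filter_timeseries_keys(filter_keys, known_keys):
    # Keep only the filter keys that the device actually reports; mismatches are dropped
    return [k for k in filter_keys if k in known_keys]


def build_latest_timeseries_endpoint(entity_type, entity_id, keys=None):
    # Assemble the latest-values telemetry endpoint, appending an optional comma-separated keys filter
    endpoint = "/api/plugins/telemetry/{0}/{1}/values/timeseries".format(entity_type, entity_id)
    if keys:
        endpoint += "?keys={0}".format(",".join(keys))
    return endpoint


# Example usage with a made-up device credential tuple
known = ["temperature", "humidity"]
valid = filter_timeseries_keys(["temperature", "bogus_key"], known)
print(build_latest_timeseries_endpoint("DEVICE", "abc-123", valid))
```

Factoring these out would also make the fallback behavior (empty filter result → no `?keys=` suffix) trivial to unit-test without touching the network.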
def getAttributes(entityType=None, entityId=None, deviceName=None, keys=None):
    """GET method to retrieve all server-type attributes configured for the device identified by the entityType/entityId pair or by the deviceName provided. Either the entityType/entityId pair or the deviceName has to be provided for this
    method to execute. If insufficient data is provided, the relevant Exception shall be raised.
    :param entityType (str) - The entity type of the object whose attributes are to be retrieved.
    :param entityId (str) - The id string that identifies the device whose attributes are to be retrieved.
    :param deviceName (str) - The name of the device that can be used to retrieve the entityType/entityId pair.
    :param keys (list of str) - Each attribute returned is a key-value pair. Use this argument to provide a key-based filter, i.e., if this list is set, only attributes whose keys match any of the list elements are returned.
    :raise utils.InputValidationException - If any of the inputs has the wrong data type or the method doesn't have the necessary data to execute.
    :raise utils.ServiceEndpointException - If problems occur when accessing the remote API.
    :return attribute_dictionary (dict) - A dictionary with the retrieved attributes in the following format:
        attribute_dictionary = {
            'attribute_1_key': 'attribute_1_value',
            'attribute_2_key': 'attribute_2_value',
            ...
            'attribute_N_key': 'attribute_N_value'
        }
    where the keys in the dictionary are the ontology-specific names (official names) and the respective values are the timeseries keys being measured by the device that map straight into those ontology names. If the device identified by the
    argument data does exist but doesn't have any attributes configured, this method returns None instead.
    """
    log = ambi_logger.get_logger(__name__)

    # Validate inputs
    if entityId:
        utils.validate_id(entity_id=entityId)
        # The entityId seems OK, but it is useless unless the entityType or, at least, the deviceName was also provided; otherwise the method cannot continue
        if not entityType and not deviceName:
            error_msg = "A valid entityId was provided but no entityType nor deviceName were added. Cannot execute this method until a valid entityType/entityId pair or a valid deviceName is provided!"
            log.error(error_msg)
            raise utils.InputValidationException(message=error_msg)
    if entityType:
        utils.validate_entity_type(entity_type=entityType)
        # Again, as before, the method can only move forward if a corresponding entityId or deviceName was also provided
        if not entityId and not deviceName:
            error_msg = "A valid entityType was provided but no corresponding entityId nor deviceName. Cannot continue until a valid entityType/entityId pair or a valid deviceName is provided!"
            log.error(error_msg)
            raise utils.InputValidationException(message=error_msg)
    if deviceName:
        utils.validate_input_type(deviceName, str)
    if keys:
        utils.validate_input_type(keys, list)
        for key in keys:
            utils.validate_input_type(key, str)

    # If the code got to this point, I either have a valid entityId/entityType pair or a deviceName. Check if only the deviceName was provided and retrieve the entityId/entityType from it
    if deviceName and (not entityType or not entityId):
        # Get the entityId and entityType from the deviceName provided
        device_data = mysql_device_controller.get_device_credentials(device_name=deviceName)
        # Check if the previous statement returned a non-empty (not None) result. If it didn't, either the device is not (yet) configured in the device table or the table needs to be updated
        if not device_data:
            error_msg = "Cannot retrieve a pair of entityId/entityType from the device name provided: {0}. Either:" \
                        "\n1. The device is not yet configured in the database/ThingsBoard platform" \
                        "\n2. The MySQL device table needs to be updated." \
                        "\nCannot continue for now".format(str(deviceName))
            log.error(error_msg)
            raise utils.InputValidationException(message=error_msg)
        # The previous method returns a 3-element tuple in the format (entityType, entityId, timeseriesKeys). Grab the relevant data straight from it
        entityType = device_data[0]
        entityId = device_data[1]

    # Validation complete. I have all I need to execute the remote call
    service_endpoint = "/api/plugins/telemetry/{0}/{1}/values/attributes".format(str(entityType), str(entityId))

    # If a list of keys was provided, concatenate them to the current endpoint
    if keys:
        service_endpoint += "?keys="
        # Add all the keys to the endpoint concatenated in a single, comma-separated (without any spaces in between) string
        service_endpoint += ",".join(keys)

    # Build the service dictionary from the endpoint already built
    service_dict = utils.build_service_calling_info(mac.get_auth_token(user_type='tenant_admin'), service_endpoint=service_endpoint)

    # Query the remote API
    try:
        response = requests.get(url=service_dict['url'], headers=service_dict['headers'])
    except (requests.exceptions.ConnectionError, requests.exceptions.ConnectTimeout) as ce:
        error_msg = "Could not get a response from {0}...".format(str(service_dict['url']))
        log.error(error_msg)
        raise utils.ServiceEndpointException(message=ce)

    # If a response was returned, check the HTTP return code
    if response.status_code != 200:
        error_msg = "Request not successful: Received an HTTP " + str(eval(response.text)['status']) + " with message: " + str(eval(response.text)['message'])
        log.error(error_msg)
        raise utils.ServiceEndpointException(message=error_msg)
    else:
        # Got a valid result. Format the returned objects for return
        data_to_return = eval(utils.translate_postgres_to_python(response.text))
        # NOTE: compare with '==' here; 'is' tests object identity, which is unreliable for integers
        if len(data_to_return) == 0:
            # Nothing to return then. Send back a None instead
            return None
        # If the request was alright, I've received the following Response Body (after eval):
        # data_to_return =
        # [
        #     {
        #         "lastUpdateTs": int,
        #         "key": str,
        #         "value": str
        #     },
        #     ...
        #     {
        #         "lastUpdateTs": int,
        #         "key": str,
        #         "value": str
        #     }
        # ]
        #
        # So I need to transform this into the return structure defined above
        attribute_dictionary = {}
        for attribute_pair in data_to_return:
            # Use this opportunity to filter out any attribute returned that is not part of the desired measurement list
            if attribute_pair['value'] not in proj_config.ontology_names:
                # If the attribute value is not one in the 'official list of names', skip it
                continue
            # Create the entries defined in the man entry of this method from the list elements returned from the remote API
            attribute_dictionary[attribute_pair['key']] = attribute_pair['value']
        # All done. Return the attributes dictionary
        return attribute_dictionary
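The response-reshaping step at the end of `getAttributes` (list of `{"lastUpdateTs", "key", "value"}` objects → flat key/value dictionary, filtered by ontology names) can be sketched as a standalone function. This is a hypothetical helper, not part of the module; it assumes the response body is valid JSON, in which case `json.loads` handles `true`/`false`/`null` natively and avoids both `eval()` and the `translate_postgres_to_python` pass:

```python
import json


def attributes_to_dict(response_text, ontology_names):
    # Parse the attribute list returned by the API; json.loads maps true/false/null
    # to Python values directly, unlike eval() on the raw text
    data = json.loads(response_text)
    if len(data) == 0:
        # Device exists but has no attributes configured
        return None
    # Keep only pairs whose value is one of the 'official' ontology names
    return {pair["key"]: pair["value"] for pair in data if pair["value"] in ontology_names}


# Example with a made-up response body and ontology list
body = ('[{"lastUpdateTs": 1579000000000, "key": "temp_sensor", "value": "temperature"},'
        ' {"lastUpdateTs": 1579000000000, "key": "debug_flag", "value": "debug"}]')
print(attributes_to_dict(body, ["temperature", "humidity"]))
```

Whether `json.loads` can fully replace the eval-based path depends on the actual response bodies the ThingsBoard installation returns, so treat this as an alternative sketch rather than a drop-in change.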
def getTimeseries(device_name, end_date, start_date=None, time_interval=None, interval=None, limit=100, agg=None, timeseries_keys_filter=None):
    """This method is the real deal, at least to establish a base methodology to retrieve hard data from the remote API server. Unlike other API-based methods so far, this one requires some data to be present in the MySQL server already,
    because that is where the actual method call input data is going to come from. The remote API service that retrieves the requested data requires 5 mandatory elements (the optional arguments are explicit in the calling signature of this
    method, where they are set to their default values already, in case that down the line there is a need to use them): entityType, entityId, keys, startTs and endTs. The first 3 parameters are going to be retrieved with a call to the MySQL
    thingsboard_devices_table and the timestamp ones are going to be determined from end_date (mandatory) together with either start_date or time_interval (only one of these two is required). The method returns a dictionary with a list of
    timestamp/value pairs per timeseries key that may or may not be limited by the limit value.
    @:type user_types allowed for this service: TENANT_ADMIN, CUSTOMER_USER
    @:param device_name (str) - The name of the device to retrieve data from (e.g., 'Thermometer A-1', 'Water Meter A-1', etc... whatever the string used when registering the device in the ThingsBoard system). This value is certainly easier
    for the user to retain and/or memorize than the id string, for instance.
    @:param end_date (datetime.datetime) - A datetime.datetime object, i.e., in the format YYYY-MM-DD hh:mm:ss, that belongs to the datetime.datetime class. This is the latest value of the interval and, to avoid invalid dates in the input
    (like future dates and such), this one is mandatory. The interval to be considered is going to be defined by either start_date (earliest) -> end_date (latest) or end_date - time_interval (in seconds) -> end_date, so one of the next two
    input arguments has to be provided.
    @:param start_date (datetime.datetime) - A datetime.datetime object delimiting the earliest point of the time interval for data retrieval.
    @:param time_interval (int) - An interval, in seconds, to be subtracted from the end_date datetime object in order to define the time window to return data from.
    @:param interval (int) - This is an OPTIONAL, API-side-only parameter whose use still eludes me... so far I've tried to place calls to the remote service with all sorts of values in this field and I'm still to discover any influence of it
    in the returned results. NOTE: My initial assumption was that it could be set as an API-side version of my time_interval. Yet, that is not the case, because the API requires both the end and start timestamps to be provided by default.
    @:param limit (int) - The number of results to return in the request. Device data can be quite a lot to process and that's why this parameter, though optional, is set to 100 by default. Two things with this value: though the API doesn't
    explicitly say so, it doesn't like limit <= 0. It doesn't return an error per se, but instead the service gets stuck until eventually an HTTP 503 - Service Unavailable is thrown. As such, I'm going to validate this input accordingly.
    Also, unlike other limit parameters so far, there's no indication in the response structure of whether the number of results returned was limited by this value or by the time interval defined. To provide the user with more helpful
    information in this regard, this method is going to count the number of returned results and, if they do match the limit value provided, warn the user about it.
    @:param agg (str) - No idea what this one does either... The API testing interface has it set to NONE by default; it is an optional parameter whose effect on the returned results is still yet to be understood. All I know so far is that
    the remote API expects a string in it.
    @:param timeseries_keys_filter (list of str) - A list of strings with the keys to be returned from the remote API. Some devices contain multiple sensors, which means that there are going to be multiple records for different variables
    under the same device ID. To limit the returned results to a subset of all parameters, provide a list in this argument with the correct names to limit the entries to be returned. Omitting this parameter (which defaults to None) returns
    all timeseries keys under the provided device ID.
    @:return result_dict (dict) - A dictionary with one entry per timeseries key, each holding a list of {'ts': int, 'value': str} dictionaries (see the reference structure in the code below).
    @:raise utils.InputValidationException - If any of the inputs provided fails validation
    @:raise utils.ServiceEndpointException - If something goes wrong with any of the external service calls to the remote API executed in the method
    @:raise mysql_utils.MySQLDatabaseException - For errors derived from the MySQL database accesses
    @:raise Exception - For any other errors detected during the method's execution
    """
    timeseries_log = ambi_logger.get_logger(__name__)

    # Before moving forward, check if at least one of the start_date, time_interval inputs was provided. NOTE: If both inputs are present, i.e., not None, the method validates both and, if both are valid, it prioritizes start_date over
    # time_interval. If one of them happens to be invalid, the method execution is not stopped, but the user gets warned (through the logger) about this and about how the method is going to operate. But at this stage, I'm only moving forward
    # if I have the conditions to set up a valid time window for the API request
    if not start_date and not time_interval:
        error_msg = "Please provide at least one valid start_date (datetime.datetime) or a time_interval (int). Cannot compute a time window for data retrieval otherwise.."
        timeseries_log.error(error_msg)
        raise utils.InputValidationException(message=error_msg)

    # Time to validate the inputs
    utils.validate_input_type(device_name, str)
    utils.validate_input_type(end_date, datetime.datetime)
    # Limit is OPTIONAL but, because of what is explained in this method's man entry, I need this value to be sort of mandatory. This verification, given that the input is already set with a decent default value, is just to protect the
    # method's execution against a user setting it to None for whatever reason that may be
    utils.validate_input_type(limit, int)
    if start_date:
        utils.validate_input_type(start_date, datetime.datetime)
    if time_interval:
        utils.validate_input_type(time_interval, int)
    if interval:
        utils.validate_input_type(interval, int)
    if agg:
        utils.validate_input_type(agg, str)
    # Validate the argument against the list type
    if timeseries_keys_filter:
        utils.validate_input_type(timeseries_keys_filter, list)
        if len(timeseries_keys_filter) <= 0:
            timeseries_log.warning("Invalid timeseries keys filter provided: empty list. This filter is going to be ignored")
            timeseries_keys_filter = None
        else:
            # And each of its elements against the expected str type
            for timeseries in timeseries_keys_filter:
                utils.validate_input_type(timeseries, str)

    # Data type validation done. Now for the functional validations
    error_msg = None
    if limit <= 0:
        error_msg = "Invalid limit value: {0}. Please provide a greater than zero integer for this argument.".format(str(limit))
    elif end_date > datetime.datetime.now():
        error_msg = "Invalid end_date provided: {0}! The date hasn't happened yet (future date). Please provide a valid datetime value!".format(str(end_date))
    elif start_date and not time_interval and start_date >= end_date:
        error_msg = "Invalid start_date! The start_date provided ({0}) is newer than/equal to the end_date ({1}): invalid time window defined!".format(str(start_date), str(end_date))
    # NOTE: this branch originally tested 'not end_date', which can never be True at this point (end_date is mandatory and already validated); 'not start_date' is what makes the branch reachable
    elif time_interval and not start_date and time_interval <= 0:
        error_msg = "Invalid time interval ({0})! Please provide a greater than zero value for this argument (the number of seconds to subtract from end_date).".format(str(time_interval))
    elif start_date and time_interval and start_date >= end_date and time_interval <= 0:
        error_msg = "Both start_date and time_interval arguments provided are invalid!\nThe start_date provided ({0}) is newer than the end_date indicated ({1}) and the time_interval has an invalid value ({2}).\n" \
                    "Please provide a valid (older) start_date or a valid (greater than 0) time_interval".format(str(start_date), str(end_date), str(time_interval))
    if error_msg:
        timeseries_log.error(error_msg)
        raise utils.InputValidationException(message=error_msg)

    # And now for the cases where both a valid start_date and a valid time_interval were provided. The previous validation bundle made sure that, if only one of these two parameters was provided, it was valid. If I got to this point I can
    # have both of these parameters set to valid inputs, but I need to warn the user that I'm only going to use one to define the time window
    if start_date and time_interval:
        timeseries_log.warning("Both start_date and time_interval arguments provided are valid, but only start_date is going to be considered moving on. Set that argument to None/Invalid to use the time_interval instead")
        # So, if I'm dropping the time_interval, I need to signal this somehow moving forward:
        time_interval = None

    # Retrieve the device's credentials using the appropriate method
    device_cred = mysql_device_controller.get_device_credentials(device_name=device_name)
    # Check if a valid set of credentials was found
    if device_cred is None:
        error_msg = "Unable to retrieve a set of valid credentials for device '{0}'".format(str(device_name))
        timeseries_log.error(error_msg)
        raise mysql_utils.MySQLDatabaseException(message=error_msg)

    # The first 3 elements that I need to build the service endpoint are valid and retrieved. Let's deal with the time window then. The service endpoint requires the limits of this window (startTs, endTs) to be passed in that weird POSIX
    # timestamp-like format that the ThingsBoard PostgreSQL database adopted, i.e., a 13-digit number with no decimal point (10 digits for the integer part + 3 for the milliseconds value... but with the decimal point omitted...). Fortunately
    # I've written the 'translate' functions already for this situation
    end_ts = mysql_utils.convert_datetime_to_timestamp_tb(end_date)
    # If the other end is defined by the start_date datetime.datetime object
    if start_date:
        # Easy
        start_ts = mysql_utils.convert_datetime_to_timestamp_tb(start_date)
    # If I got to this point in the code, given the brutality of validations undertaken so far, I can only get here with start_date = None and something valid in time_interval. Proceed accordingly
    else:
        # I need to convert this interval to a timedelta object to be able to subtract it from the end_date one
        time_interval = datetime.timedelta(seconds=int(time_interval))
        start_time = end_date - time_interval
        start_ts = mysql_utils.convert_datetime_to_timestamp_tb(start_time)

    # Done with the validations. Start building the service endpoint then
    service_endpoint = "/api/plugins/telemetry/" + str(device_cred[0]) + "/" + str(device_cred[1]) + "/values/timeseries?"
    url_elements = []
    if interval:
        url_elements.append("interval=" + str(interval))
    url_elements.append("limit=" + str(limit))
    if agg:
        url_elements.append("agg=" + str(agg))

    # The element in device_cred[2] can contain multiple timeseries keys (if the device in question is a multi-sensor one). If a timeseries filter was provided, it is now time to apply it to reduce the number of variable types returned
    if timeseries_keys_filter:
        # Grab the original key list to a single variable. NOTE: keep the actual list here (the original cast it with str(), which made the fallback ",".join() below operate character by character)
        device_ts_keys_list = device_cred[2]
        valid_keys = []
        for timeseries_key in timeseries_keys_filter:
            # And now check if any of the elements passed in the filter list is in the initial list
            if timeseries_key in device_ts_keys_list:
                # Add it to the valid keys list if so
                valid_keys.append(timeseries_key)
            # Otherwise warn the user of the mismatch
            else:
                timeseries_log.warning("The filter key '{0}' provided in the filter list is not a valid timeseries key. Ignoring it...".format(str(timeseries_key)))
        # If the last loop didn't yield any valid results, warn the user and default to the original key list
        if not len(valid_keys):
            timeseries_log.warning("Unable to apply timeseries key filter: none of the provided keys had a match. Defaulting to {0}...".format(str(device_ts_keys_list)))
            valid_keys = device_ts_keys_list
        else:
            # And inform the user of the alteration
            timeseries_log.info("Valid filter found. Running remote API query with keys: {0}".format(str(valid_keys)))
        url_elements.append("keys=" + ",".join(valid_keys))
    else:
        # No filters required. Append the full timeseries elements then
        url_elements.append("keys=" + ",".join(device_cred[2]))
    url_elements.append("startTs=" + str(start_ts))
    url_elements.append("endTs=" + str(end_ts))
    # Done. Now mash up the whole thing into a '&'-separated string
    service_endpoint += "&".join(url_elements)

    # I'm finally ready to query the remote endpoint. This service requires a REGULAR type authorization token
    service_dict = utils.build_service_calling_info(mac.get_auth_token(user_type='tenant_admin'), service_endpoint)
    try:
        response = requests.get(url=service_dict['url'], headers=service_dict['headers'])
    except (requests.exceptions.ConnectionError, requests.exceptions.ConnectTimeout):
        error_msg = "Unable to establish a connection with {0}...".format(str(service_dict['url']))
        timeseries_log.error(error_msg)
        raise utils.ServiceEndpointException(message=error_msg)

    # Check first if the response came back with an HTTP 200
    if response.status_code != 200:
        error_msg = "Request unsuccessful: Received HTTP {0} with message: {1}".format(str(response.status_code), str(eval(response.text)['message']))
        timeseries_log.error(error_msg)
        raise utils.ServiceEndpointException(message=error_msg)
    else:
        # The results are going to be returned as a dictionary of lists in the following format:
        # result_dict = {
        #     "timeseries_key_1": [
        #         {'ts': int, 'value': str},
        #         {'ts': int, 'value': str},
        #         ...
        #         {'ts': int, 'value': str}
        #     ],
        #     "timeseries_key_2": [
        #         {'ts': int, 'value': str},
        #         {'ts': int, 'value': str},
        #         ...
        #         {'ts': int, 'value': str}
        #     ],
        #     ...
        #     "timeseries_key_N": [
        #         {'ts': int, 'value': str},
        #         {'ts': int, 'value': str},
        #         ...
        #         {'ts': int, 'value': str}
        #     ]
        # }
        # Use this as a reference for when another method needs to consume data from this response. It's an over-complicated structure, honestly, and it wouldn't be hard to create a simple method to call after this one to simplify it
        # greatly. But there's no point in doing that until we know exactly what format needs to be returned.
        # Apply the 'eval' built-in just to transform the str that is returned into a dict
        result_dict = eval(response.text)

        # Finally, check if any of the entries in the returned dictionary matches the 'limit' parameter and warn the user of potentially missing results if so
        for result_key in list(result_dict.keys()):
            if len(result_dict[result_key]) == limit:
                timeseries_log.warning("Timeseries key '{0}' results were limited by the 'limit' parameter: got {1} valid results back".format(str(result_key), str(limit)))
        # Return the result dictionary finally
        return result_dict
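The comment block above notes that a small post-processing method could simplify the nested response structure once the desired output format is settled. A minimal sketch of such a flattener (a hypothetical helper, not part of the module), which also performs the limit check the method does at the end:

```python
def flatten_timeseries(result_dict, limit=None):
    # Reduce {key: [{'ts': ..., 'value': ...}, ...]} to {key: [(ts, value), ...]}
    # and report which keys hit the 'limit' ceiling (possible truncation)
    flattened = {}
    truncated_keys = []
    for key, records in result_dict.items():
        flattened[key] = [(rec["ts"], rec["value"]) for rec in records]
        if limit is not None and len(records) == limit:
            truncated_keys.append(key)
    return flattened, truncated_keys


# Example usage with a made-up response dictionary
sample = {"temperature": [{"ts": 1, "value": "20.1"}, {"ts": 2, "value": "20.3"}]}
flat, hit_limit = flatten_timeseries(sample, limit=2)
print(flat)
print(hit_limit)
```

Keys reported in `truncated_keys` are candidates for a follow-up call with a higher `limit` or a narrower time window, since hitting the limit exactly suggests results may have been cut off.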