def delete(pipeline: IPipeline):
    if get_pipeline_status(pipeline) in [IPipeline.STATUS_RUNNING, IPipeline.STATUS_RETRY]:
        stop(pipeline)
    _client(pipeline).delete_pipeline(pipeline.get_id())
    pipeline.delete_streamsets()
def _client(pipeline: IPipeline) -> _StreamSetsApiClient:
    if not pipeline.get_streamsets():
        raise StreamsetsException(
            f'Pipeline `{pipeline.get_id()}` does not belong to any StreamSets'
        )
    # lazily create and cache one API client per StreamSets instance
    if pipeline.get_streamsets().get_id() not in _clients:
        _clients[pipeline.get_streamsets().get_id()] = _StreamSetsApiClient(pipeline.get_streamsets())
    return _clients[pipeline.get_streamsets().get_id()]
def stop(pipeline: IPipeline):
    inject.instance(ILogger).info(f'Stopping the pipeline `{pipeline.get_id()}`')
    client = _client(pipeline)
    client.stop_pipeline(pipeline.get_id())
    try:
        client.wait_for_status(pipeline.get_id(), IPipeline.STATUS_STOPPED)
    except PipelineFreezeException:
        inject.instance(ILogger).info(f'Force stopping the pipeline `{pipeline.get_id()}`')
        force_stop(pipeline)
def _move(self, pipeline: IPipeline, to_streamsets: IStreamSets):
    self.logger.info(
        f'Moving `{pipeline.get_id()}` from `{pipeline.get_streamsets().get_url()}` to `{to_streamsets.get_url()}`'
    )
    should_start = client.get_pipeline_status(pipeline) in [
        IPipeline.STATUS_STARTING, IPipeline.STATUS_RUNNING
    ]
    client.delete(pipeline)
    pipeline.set_streamsets(to_streamsets)
    client.create(pipeline)
    self.pipeline_provider.save(pipeline)
    if should_start:
        client.start(pipeline)
    self.streamsets_pipelines[to_streamsets].append(pipeline)
def start(pipeline: IPipeline, wait_for_sending_data: bool = False):
    client = _client(pipeline)
    client.start_pipeline(pipeline.get_id())
    client.wait_for_status(pipeline.get_id(), IPipeline.STATUS_RUNNING)
    inject.instance(ILogger).info(f'{pipeline.get_id()} pipeline is running')
    if wait_for_sending_data:
        try:
            if _wait_for_sending_data(pipeline):
                inject.instance(ILogger).info(f'{pipeline.get_id()} pipeline is sending data')
            else:
                inject.instance(ILogger).info(f'{pipeline.get_id()} pipeline did not send any data')
        except PipelineException as e:
            inject.instance(ILogger).error(str(e))
def force_stop(pipeline: IPipeline):
    client = _client(pipeline)
    try:
        # todo IT SHOULD FORCE STOP
        client.stop_pipeline(pipeline.get_id())
    except ApiClientException:
        pass
    status = get_pipeline_status(pipeline)
    if status == IPipeline.STATUS_STOPPED:
        return
    if status != IPipeline.STATUS_STOPPING:
        raise PipelineException("Can't force stop a pipeline not in the STOPPING state")
    client.force_stop(pipeline.get_id())
    client.wait_for_status(pipeline.get_id(), IPipeline.STATUS_STOPPED)
def get_pipeline_info(pipeline: IPipeline, number_of_history_records: int) -> dict:
    client = _client(pipeline)
    status = client.get_pipeline_status(pipeline.get_id())
    metrics = json.loads(status['metrics']) if status['metrics'] else get_pipeline_metrics(pipeline)
    pipeline_info = client.get_pipeline(pipeline.get_id())
    history = client.get_pipeline_history(pipeline.get_id())
    return {
        'status': '{status} {message}'.format(**status),
        'metrics': _get_metrics_string(metrics),
        'metric_errors': _get_metric_errors(pipeline, metrics),
        'pipeline_issues': _extract_pipeline_issues(pipeline_info),
        'stage_issues': _extract_stage_issues(pipeline_info),
        'history': _get_history_info(history, number_of_history_records),
    }
def get_pipeline_logs(pipeline: IPipeline, severity: Optional[Severity], number_of_records: int) -> list:
    client = _client(pipeline)
    if severity is not None:
        severity = severity.value
    return _transform_logs(
        client.get_pipeline_logs(pipeline.get_id(), severity)[:number_of_records]
    )
def create(pipeline: IPipeline):
    # todo remove this if check and make streamsets mandatory after that fix todos above
    if not pipeline.get_streamsets():
        pipeline.set_streamsets(
            balancer.least_loaded_streamsets(balancer.get_streamsets_pipelines())
        )
    try:
        _client(pipeline).create_pipeline(pipeline.get_id())
    except ApiClientException as e:
        raise StreamsetsException(str(e))
    try:
        _update_pipeline(pipeline, set_offset=True)
    except ApiClientException as e:
        delete(pipeline)
        raise StreamsetsException(str(e))
    except Exception:
        delete(pipeline)
        raise
def _get_metric_errors(pipeline: IPipeline, metrics: Optional[dict]) -> list:
    errors = []
    if metrics:
        for name, counter in metrics['counters'].items():
            stage_name = re.search(r'stage\.(.+)\.errorRecords\.counter', name)
            if counter['count'] == 0 or not stage_name:
                continue
            for error in _client(pipeline).get_pipeline_errors(pipeline.get_id(), stage_name.group(1)):
                errors.append(
                    f'{_format_timestamp(error["header"]["errorTimestamp"])} - {error["header"]["errorMessage"]}'
                )
    return errors
def _update_pipeline(pipeline: IPipeline, set_offset=False):
    client = _client(pipeline)
    if set_offset and pipeline.get_offset():
        client.post_pipeline_offset(pipeline.get_id(), json.loads(pipeline.get_offset()))
    config = pipeline.get_streamsets_config()
    config['uuid'] = client.get_pipeline(pipeline.get_id())['uuid']
    return client.update_pipeline(pipeline.get_id(), config)
def get_pipeline_offset(pipeline: IPipeline) -> Optional[str]:
    res = _client(pipeline).get_pipeline_offset(pipeline.get_id())
    if res:
        return json.dumps(res)
    return None
def validate(pipeline: IPipeline):
    return _client(pipeline).validate(pipeline.get_id())


def wait_for_preview(pipeline: IPipeline, preview_id: str) -> Tuple[list, list]:
    return _client(pipeline).wait_for_preview(pipeline.get_id(), preview_id)


def get_pipeline_metrics(pipeline: IPipeline) -> dict:
    return _client(pipeline).get_pipeline_metrics(pipeline.get_id())


def reset(pipeline: IPipeline):
    _client(pipeline).reset_pipeline(pipeline.get_id())


def create_preview(pipeline: IPipeline) -> dict:
    return _client(pipeline).create_preview(pipeline.get_id())


def get_pipeline_status(pipeline: IPipeline) -> str:
    return _client(pipeline).get_pipeline_status(pipeline.get_id())['status']