def main(args):
    config = load_config(args.config)
    check_classes(config)

    index = [i for i in range(len(config["classes"])) if config["classes"][i]["title"] == args.type]
    if not index:
        sys.exit("ERROR: Requested type {} not found among classes titles in the config file.".format(args.type))

    print("RoboSat.pink - vectorize {} from {}".format(args.type, args.masks))

    with open(args.out, "w", encoding="utf-8") as out:
        first = True
        out.write('{"type":"FeatureCollection","features":[')

        for tile, path in tqdm(list(tiles_from_slippy_map(args.masks)), ascii=True, unit="mask"):
            try:
                features = (np.array(Image.open(path).convert("P"), dtype=np.uint8) == index).astype(np.uint8)
                try:
                    C, W, H = features.shape
                except ValueError:  # single band mask
                    W, H = features.shape

                transform = rasterio.transform.from_bounds(*mercantile.bounds(tile.x, tile.y, tile.z), W, H)

                for shape, value in rasterio.features.shapes(features, transform=transform):
                    prop = '"properties":{{"x":{},"y":{},"z":{}}}'.format(int(tile.x), int(tile.y), int(tile.z))
                    geom = '"geometry":{{"type": "Polygon", "coordinates":{}}}'.format(json.dumps(shape["coordinates"]))
                    out.write('{}{{"type":"Feature",{},{}}}'.format("," if not first else "", geom, prop))
                    first = False

            except Exception:
                sys.exit("ERROR: Unable to vectorize tile {}.".format(str(tile)))

        out.write("]}")
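
# A minimal, standalone sketch of the vectorization step above: a binary mask
# plus the tile's geographic bounds are all rasterio.features.shapes() needs
# to emit GeoJSON-like polygons. The tile coordinates and the square footprint
# below are hypothetical.
import mercantile
import numpy as np
import rasterio.features
import rasterio.transform

mask = np.zeros((256, 256), dtype=np.uint8)
mask[64:192, 64:192] = 1  # fake square footprint

west, south, east, north = mercantile.bounds(533, 383, 10)
transform = rasterio.transform.from_bounds(west, south, east, north, mask.shape[1], mask.shape[0])

# shapes() yields one (geometry, pixel value) pair per connected region
for geometry, value in rasterio.features.shapes(mask, transform=transform):
    if value == 1:
        print(geometry["type"], geometry["coordinates"][0][:2], "...")
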
def main(args):
    if not args.workers:
        args.workers = max(1, math.floor(os.cpu_count() * 0.5))

    if args.label:
        config = load_config(args.config)
        check_classes(config)
        colors = [classe["color"] for classe in config["classes"]]
        palette = make_palette(*colors)

    splits_path = os.path.join(os.path.expanduser(args.out), ".splits")
    tiles_map = {}

    print("RoboSat.pink - tile on CPU, with {} workers".format(args.workers))

    bands = -1
    for path in args.rasters:
        try:
            raster = rasterio_open(path)
            w, s, e, n = transform_bounds(raster.crs, "EPSG:4326", *raster.bounds)
        except Exception:
            sys.exit("Error: Unable to load raster {} or deal with its projection".format(path))

        if bands != -1:
            assert bands == len(raster.indexes), "Coverage must be bands consistent"
        bands = len(raster.indexes)

        tiles = [mercantile.Tile(x=x, y=y, z=z) for x, y, z in mercantile.tiles(w, s, e, n, args.zoom)]
        for tile in tiles:
            tile_key = (str(tile.x), str(tile.y), str(tile.z))
            if tile_key not in tiles_map.keys():
                tiles_map[tile_key] = []
            tiles_map[tile_key].append(path)

    if args.label:
        ext = "png"
        bands = 1
    if not args.label:
        if bands == 1:
            ext = "png"
        if bands == 3:
            ext = "webp"
        if bands > 3:
            ext = "tiff"

    tiles = []
    progress = tqdm(total=len(tiles_map), ascii=True, unit="tile")

    # Begin to tile plain tiles
    with futures.ThreadPoolExecutor(args.workers) as executor:

        def worker(path):
            raster = rasterio_open(path)
            w, s, e, n = transform_bounds(raster.crs, "EPSG:4326", *raster.bounds)
            transform, _, _ = calculate_default_transform(raster.crs, "EPSG:3857", raster.width, raster.height, w, s, e, n)
            tiles = [mercantile.Tile(x=x, y=y, z=z) for x, y, z in mercantile.tiles(w, s, e, n, args.zoom)]
            tiled = []

            for tile in tiles:
                try:
                    w, s, e, n = mercantile.xy_bounds(tile)

                    # Inspired by rio-tiler, cf: https://github.com/mapbox/rio-tiler/pull/45
                    warp_vrt = WarpedVRT(
                        raster,
                        crs="epsg:3857",
                        resampling=Resampling.bilinear,
                        add_alpha=False,
                        transform=from_bounds(w, s, e, n, args.ts, args.ts),
                        width=math.ceil((e - w) / transform.a),
                        height=math.ceil((s - n) / transform.e),
                    )
                    data = warp_vrt.read(out_shape=(len(raster.indexes), args.ts, args.ts), window=warp_vrt.window(w, s, e, n))
                    image = np.moveaxis(data, 0, 2)  # C,H,W -> H,W,C
                except Exception:
                    sys.exit("Error: Unable to tile {} from raster {}.".format(str(tile), raster))

                tile_key = (str(tile.x), str(tile.y), str(tile.z))
                if not args.label and len(tiles_map[tile_key]) == 1 and is_border(image):
                    progress.update()
                    continue

                if len(tiles_map[tile_key]) > 1:  # tile straddles several rasters, write to a split dir
                    out = os.path.join(splits_path, str(tiles_map[tile_key].index(path)))
                else:
                    out = args.out

                x, y, z = map(int, tile)
                if not args.label:
                    ret = tile_image_to_file(out, mercantile.Tile(x=x, y=y, z=z), image)
                if args.label:
                    ret = tile_label_to_file(out, mercantile.Tile(x=x, y=y, z=z), palette, image)
                if not ret:
                    sys.exit("Error: Unable to write tile {} from raster {}.".format(str(tile), raster))

                if len(tiles_map[tile_key]) == 1:
                    progress.update()
                    tiled.append(mercantile.Tile(x=x, y=y, z=z))

            return tiled

        for tiled in executor.map(worker, args.rasters):
            if tiled is not None:
                tiles.extend(tiled)

    # Aggregate remaining tiles splits
    with futures.ThreadPoolExecutor(args.workers) as executor:

        def worker(tile_key):
            if len(tiles_map[tile_key]) == 1:
                return

            image = np.zeros((args.ts, args.ts, bands), np.uint8)
            x, y, z = map(int, tile_key)

            for i in range(len(tiles_map[tile_key])):
                root = os.path.join(splits_path, str(i))
                _, path = tile_from_slippy_map(root, x, y, z)

                if not args.label:
                    split = tile_image_from_file(path)
                if args.label:
                    split = tile_label_from_file(path)
                    split = split.reshape((args.ts, args.ts, 1))  # H,W -> H,W,C

                assert image.shape == split.shape
                image[:, :, :] += split[:, :, :]

            if not args.label and is_border(image):
                progress.update()
                return

            tile = mercantile.Tile(x=x, y=y, z=z)
            if not args.label:
                ret = tile_image_to_file(args.out, tile, image)
            if args.label:
                ret = tile_label_to_file(args.out, tile, palette, image)
            if not ret:
                sys.exit("Error: Unable to write tile {}.".format(str(tile_key)))

            progress.update()
            return tile

        for tiled in executor.map(worker, tiles_map.keys()):
            if tiled is not None:
                tiles.append(tiled)

    if splits_path and os.path.isdir(splits_path):
        shutil.rmtree(splits_path)  # Delete suffixes dir if any

    if not args.no_web_ui:
        template = "leaflet.html" if not args.web_ui_template else args.web_ui_template
        base_url = args.web_ui_base_url if args.web_ui_base_url else "./"
        web_ui(args.out, base_url, tiles, tiles, ext, template)
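
# A minimal sketch of the tile enumeration the worker above relies on:
# mercantile.tiles() yields every slippy-map tile covering a WGS84 bbox at a
# given zoom, and mercantile.xy_bounds() returns that tile's EPSG:3857 extent,
# which feeds the WarpedVRT transform. The bbox below is hypothetical.
import mercantile

w, s, e, n = 2.29, 48.84, 2.36, 48.88
for tile in mercantile.tiles(w, s, e, n, 16):
    bounds = mercantile.xy_bounds(tile)  # Bbox(left, bottom, right, top), meters
    print(tile, round(bounds.right - bounds.left, 1), "m wide")
    break  # first tile only
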
def main(args):
    if (args.geojson and args.postgis) or (not args.geojson and not args.postgis):
        sys.exit("ERROR: Input features to rasterize must be either GeoJSON or PostGIS")

    if args.postgis and not args.pg_dsn:
        sys.exit("ERROR: With PostGIS input features, --pg_dsn must be provided")

    config = load_config(args.config)
    check_classes(config)
    palette = make_palette(*[classe["color"] for classe in config["classes"]], complementary=True)
    burn_value = 1

    args.out = os.path.expanduser(args.out)
    os.makedirs(args.out, exist_ok=True)
    log = Logs(os.path.join(args.out, "log"), out=sys.stderr)

    def geojson_parse_polygon(zoom, srid, feature_map, polygon, i):
        try:
            if srid != 4326:
                polygon = [xy for xy in geojson_reproject({"type": "feature", "geometry": polygon}, srid, 4326)][0]

            for j, ring in enumerate(polygon["coordinates"]):  # GeoJSON coordinates could be N dimensional
                polygon["coordinates"][j] = [[x, y] for point in ring for x, y in zip([point[0]], [point[1]])]

            if polygon["coordinates"]:
                for tile in burntiles.burn([{"type": "feature", "geometry": polygon}], zoom=zoom):
                    feature_map[mercantile.Tile(*tile)].append({"type": "feature", "geometry": polygon})

        except ValueError:
            log.log("Warning: invalid feature {}, skipping".format(i))

        return feature_map

    def geojson_parse_geometry(zoom, srid, feature_map, geometry, i):
        if geometry["type"] == "Polygon":
            feature_map = geojson_parse_polygon(zoom, srid, feature_map, geometry, i)
        elif geometry["type"] == "MultiPolygon":
            for polygon in geometry["coordinates"]:
                feature_map = geojson_parse_polygon(zoom, srid, feature_map, {"type": "Polygon", "coordinates": polygon}, i)
        else:
            log.log("Notice: {} is a non surfacic geometry type, skipping feature {}".format(geometry["type"], i))

        return feature_map

    if args.geojson:
        try:
            tiles = [tile for tile in tiles_from_csv(os.path.expanduser(args.cover))]
            zoom = tiles[0].z
            assert not [tile for tile in tiles if tile.z != zoom]
        except Exception:
            sys.exit("ERROR: Inconsistent cover {}".format(args.cover))

        feature_map = collections.defaultdict(list)

        log.log("RoboSat.pink - rasterize - Compute spatial index")
        for geojson_file in args.geojson:
            with open(os.path.expanduser(geojson_file)) as geojson:
                try:
                    feature_collection = json.load(geojson)
                except Exception:
                    sys.exit("ERROR: {} is not a valid JSON file.".format(geojson_file))

                try:
                    crs_mapping = {"CRS84": "4326", "900913": "3857"}
                    srid = feature_collection["crs"]["properties"]["name"].split(":")[-1]
                    srid = int(srid) if srid not in crs_mapping else int(crs_mapping[srid])
                except Exception:
                    srid = 4326

                for i, feature in enumerate(tqdm(feature_collection["features"], ascii=True, unit="feature")):
                    try:
                        if feature["geometry"]["type"] == "GeometryCollection":
                            for geometry in feature["geometry"]["geometries"]:
                                feature_map = geojson_parse_geometry(zoom, srid, feature_map, geometry, i)
                        else:
                            feature_map = geojson_parse_geometry(zoom, srid, feature_map, feature["geometry"], i)
                    except Exception:
                        sys.exit("ERROR: Unable to parse {}. Seems not to be a valid GeoJSON file.".format(geojson_file))

        log.log("RoboSat.pink - rasterize - rasterizing tiles from {} on cover {}".format(args.geojson, args.cover))
        with open(os.path.join(os.path.expanduser(args.out), "instances.cover"), mode="w") as cover:
            for tile in tqdm(list(tiles_from_csv(os.path.expanduser(args.cover))), ascii=True, unit="tile"):
                try:
                    if tile in feature_map:
                        cover.write("{},{},{} {}{}".format(tile.x, tile.y, tile.z, len(feature_map[tile]), os.linesep))
                        out = geojson_tile_burn(tile, feature_map[tile], 4326, args.ts, burn_value)
                    else:
                        cover.write("{},{},{} {}{}".format(tile.x, tile.y, tile.z, 0, os.linesep))
                        out = np.zeros(shape=(args.ts, args.ts), dtype=np.uint8)

                    tile_label_to_file(args.out, tile, palette, out)
                except Exception:
                    log.log("Warning: Unable to rasterize tile. Skipping {}".format(str(tile)))

    if args.postgis:
        try:
            pg_conn = psycopg2.connect(args.pg_dsn)
            pg = pg_conn.cursor()
        except Exception:
            sys.exit("Unable to connect PostgreSQL: {}".format(args.pg_dsn))

        log.log("RoboSat.pink - rasterize - rasterizing tiles from PostGIS on cover {}".format(args.cover))
        log.log(" SQL {}".format(args.postgis))

        try:
            pg.execute("SELECT ST_Srid(geom) AS srid FROM ({} LIMIT 1) AS sub".format(args.postgis))
            srid = pg.fetchone()[0]
        except Exception:
            sys.exit("Unable to retrieve geometry SRID.")

        for tile in tqdm(list(tiles_from_csv(args.cover)), ascii=True, unit="tile"):
            w, s, e, n = mercantile.bounds(tile)  # (west, south, east, north)
            raster = np.zeros((args.ts, args.ts))

            query = """
            WITH
              bbox      AS (SELECT ST_Transform(ST_MakeEnvelope({},{},{},{}, 4326), {}  ) AS bbox),
              bbox_merc AS (SELECT ST_Transform(ST_MakeEnvelope({},{},{},{}, 4326), 3857) AS bbox),
              rast_a    AS (SELECT ST_AddBand(
                                     ST_SetSRID(
                                       ST_MakeEmptyRaster({}, {}, ST_Xmin(bbox), ST_Ymax(bbox), (ST_YMax(bbox) - ST_YMin(bbox)) / {}),
                                     3857),
                                   '8BUI'::text, 0) AS rast
                            FROM bbox_merc),
              features  AS (SELECT ST_Union(ST_Transform(ST_Force2D(geom), 3857)) AS geom
                            FROM ({}) AS sub, bbox
                            WHERE ST_Intersects(geom, bbox)),
              rast_b    AS (SELECT ST_AsRaster(geom, rast, '8BUI', {}) AS rast
                            FROM features, rast_a
                            WHERE NOT ST_IsEmpty(geom))
            SELECT ST_AsBinary(ST_MapAlgebra(rast_a.rast, rast_b.rast, '{}', NULL, 'FIRST')) AS wkb FROM rast_a, rast_b
            """.format(w, s, e, n, srid, w, s, e, n, args.ts, args.ts, args.ts, args.postgis, burn_value, burn_value)

            try:
                pg.execute(query)
                row = pg.fetchone()
                if row:
                    raster = np.squeeze(wkb_to_numpy(io.BytesIO(row[0])), axis=2)
            except Exception:
                log.log("Warning: Invalid geometries, skipping {}".format(tile))
                pg_conn = psycopg2.connect(args.pg_dsn)  # reconnect, as the transaction is aborted
                pg = pg_conn.cursor()

            try:
                tile_label_to_file(args.out, tile, palette, raster)
            except Exception:
                log.log("Warning: Unable to rasterize tile. Skipping {}".format(str(tile)))

    if not args.no_web_ui:
        template = "leaflet.html" if not args.web_ui_template else args.web_ui_template
        base_url = args.web_ui_base_url if args.web_ui_base_url else "./"
        tiles = [tile for tile in tiles_from_csv(args.cover)]
        web_ui(args.out, base_url, tiles, tiles, "png", template)
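
# A minimal sketch of the spatial index built in geojson_parse_polygon():
# supermercado's burntiles.burn() maps a polygon to the z/x/y tiles it covers,
# which become defaultdict keys. The polygon below is hypothetical.
import collections
import mercantile
from supermercado import burntiles

feature = {"type": "Feature", "geometry": {"type": "Polygon", "coordinates": [
    [[2.30, 48.85], [2.31, 48.85], [2.31, 48.86], [2.30, 48.86], [2.30, 48.85]]]}}

feature_map = collections.defaultdict(list)
for x, y, z in burntiles.burn([feature], zoom=16):
    feature_map[mercantile.Tile(int(x), int(y), int(z))].append(feature)

print(len(feature_map), "tiles indexed")
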
def main(args):
    config = load_config(args.config)
    args.out = os.path.expanduser(args.out)
    args.workers = torch.cuda.device_count() * 2 if torch.cuda.is_available() and not args.workers else args.workers

    config["model"]["loader"] = args.loader if args.loader else config["model"]["loader"]
    config["model"]["bs"] = args.bs if args.bs else config["model"]["bs"]
    config["model"]["lr"] = args.lr if args.lr else config["model"]["lr"]
    config["model"]["ts"] = args.ts if args.ts else config["model"]["ts"]
    config["model"]["nn"] = args.nn if args.nn else config["model"]["nn"]
    config["model"]["loss"] = args.loss if args.loss else config["model"]["loss"]
    config["model"]["da"] = args.da if args.da else config["model"]["da"]
    config["model"]["dap"] = args.dap if args.dap else config["model"]["dap"]

    check_classes(config)
    check_channels(config)
    check_model(config)

    if not os.path.isdir(os.path.expanduser(args.dataset)):
        sys.exit("ERROR: dataset {} is not a directory".format(args.dataset))

    log = Logs(os.path.join(args.out, "log"))

    if torch.cuda.is_available():
        log.log("RoboSat.pink - training on {} GPUs, with {} workers".format(torch.cuda.device_count(), args.workers))
        log.log("(Torch:{} Cuda:{} CudNN:{})".format(torch.__version__, torch.version.cuda, torch.backends.cudnn.version()))
        device = torch.device("cuda")
        torch.backends.cudnn.benchmark = True
    else:
        log.log("RoboSat.pink - training on CPU, with {} workers - (Torch:{})".format(args.workers, torch.__version__))
        log.log("WARNING: Are you really sure about not training on GPU ?")
        device = torch.device("cpu")

    try:
        loader = import_module("robosat_pink.loaders.{}".format(config["model"]["loader"].lower()))
        loader_train = getattr(loader, config["model"]["loader"])(config, config["model"]["ts"], os.path.join(args.dataset, "training"), "train")
        loader_val = getattr(loader, config["model"]["loader"])(config, config["model"]["ts"], os.path.join(args.dataset, "validation"), "train")
    except Exception:
        sys.exit("ERROR: Unable to load data loaders")

    try:
        model_module = import_module("robosat_pink.models.{}".format(config["model"]["nn"].lower()))
    except Exception:
        sys.exit("ERROR: Unable to load {} model".format(config["model"]["nn"]))

    nn = getattr(model_module, config["model"]["nn"])(loader_train.shape_in, loader_train.shape_out, config["model"]["pretrained"]).to(device)
    nn = torch.nn.DataParallel(nn)
    optimizer = Adam(nn.parameters(), lr=config["model"]["lr"])

    resume = 0
    if args.checkpoint:
        try:
            chkpt = torch.load(os.path.expanduser(args.checkpoint), map_location=device)
            nn.load_state_dict(chkpt["state_dict"])
            log.log("Using checkpoint: {}".format(args.checkpoint))
        except Exception:
            sys.exit("ERROR: Unable to load {} checkpoint".format(args.checkpoint))

        if args.resume:
            optimizer.load_state_dict(chkpt["optimizer"])
            resume = chkpt["epoch"]
            if resume >= args.epochs:
                sys.exit("ERROR: Epoch {} already reached by the given checkpoint".format(resume))

    try:
        loss_module = import_module("robosat_pink.losses.{}".format(config["model"]["loss"].lower()))
        criterion = getattr(loss_module, config["model"]["loss"])().to(device)
    except Exception:
        sys.exit("ERROR: Unable to load {} loss".format(config["model"]["loss"]))

    bs = config["model"]["bs"]
    train_loader = DataLoader(loader_train, batch_size=bs, shuffle=True, drop_last=True, num_workers=args.workers)
    val_loader = DataLoader(loader_val, batch_size=bs, shuffle=False, drop_last=True, num_workers=args.workers)

    log.log("--- Input tensor from Dataset: {} ---".format(args.dataset))
    num_channel = 1  # 1-based numbering for channels
    for channel in config["channels"]:
        for band in channel["bands"]:
            log.log("Channel {}:\t\t {}[band: {}]".format(num_channel, channel["name"], band))
            num_channel += 1

    log.log("--- Hyper Parameters ---")
    for hp in config["model"]:
        log.log("{}{}".format(hp.ljust(25, " "), config["model"][hp]))

    for epoch in range(resume, args.epochs):
        UUID = uuid.uuid1()
        log.log("---{}Epoch: {}/{} -- UUID: {}".format(os.linesep, epoch + 1, args.epochs, UUID))

        train(train_loader, config, log, device, nn, optimizer, criterion)
        validate(val_loader, config, log, device, nn, criterion)

        try:  # https://github.com/pytorch/pytorch/issues/9176
            nn_doc = nn.module.doc
            nn_version = nn.module.version
        except AttributeError:
            nn_version = nn.version
            nn_doc = nn.doc

        states = {
            "uuid": UUID,
            "model_version": nn_version,
            "producer_name": "RoboSat.pink",
            "producer_version": "0.4.0",
            "model_licence": "MIT",
            "domain": "pink.RoboSat",  # reverse-DNS
            "doc_string": nn_doc,
            "shape_in": loader_train.shape_in,
            "shape_out": loader_train.shape_out,
            "state_dict": nn.state_dict(),
            "epoch": epoch + 1,
            "nn": config["model"]["nn"],
            "optimizer": optimizer.state_dict(),
            "loader": config["model"]["loader"],
        }

        checkpoint_path = os.path.join(args.out, "checkpoint-{:05d}.pth".format(epoch + 1))
        try:
            torch.save(states, checkpoint_path)
        except Exception:
            sys.exit("ERROR: Unable to save checkpoint {}".format(checkpoint_path))
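
# A minimal sketch showing how a checkpoint saved above can be inspected;
# the filename is hypothetical. The metadata keys mirror the `states` dict
# (producer_name, model_version, shape_in, shape_out, epoch, nn, loader, ...).
import torch

chkpt = torch.load("checkpoint-00010.pth", map_location=torch.device("cpu"))
for key, value in chkpt.items():
    if key not in ("state_dict", "optimizer"):  # skip the large tensor dicts
        print(key, ":", value)
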
def main(args):
    config = load_config(args.config)
    check_channels(config)
    check_classes(config)
    args.workers = torch.cuda.device_count() * 2 if torch.cuda.is_available() and not args.workers else args.workers

    log = Logs(os.path.join(args.out, "log"))

    if torch.cuda.is_available():
        log.log("RoboSat.pink - predict on {} GPUs, with {} workers".format(torch.cuda.device_count(), args.workers))
        log.log("(Torch:{} Cuda:{} CudNN:{})".format(torch.__version__, torch.version.cuda, torch.backends.cudnn.version()))
        device = torch.device("cuda")
        torch.backends.cudnn.benchmark = True
    else:
        log.log("RoboSat.pink - predict on CPU, with {} workers".format(args.workers))
        device = torch.device("cpu")

    try:
        chkpt = torch.load(args.checkpoint, map_location=device)
        assert chkpt["producer_name"] == "RoboSat.pink"
        model_module = import_module("robosat_pink.models.{}".format(chkpt["nn"].lower()))
        nn = getattr(model_module, chkpt["nn"])(chkpt["shape_in"], chkpt["shape_out"]).to(device)
        nn = torch.nn.DataParallel(nn)
        nn.load_state_dict(chkpt["state_dict"])
        nn.eval()
    except Exception:
        sys.exit("ERROR: Unable to load {} checkpoint.".format(args.checkpoint))

    log.log("Model {} - UUID: {}".format(chkpt["nn"], chkpt["uuid"]))

    try:
        loader_module = import_module("robosat_pink.loaders.{}".format(chkpt["loader"].lower()))
        loader_predict = getattr(loader_module, chkpt["loader"])(config, chkpt["shape_in"][1:3], args.tiles, mode="predict")
    except Exception:
        sys.exit("ERROR: Unable to load {} data loader.".format(chkpt["loader"]))

    loader = DataLoader(loader_predict, batch_size=args.bs, num_workers=args.workers)
    palette = make_palette(config["classes"][0]["color"], config["classes"][1]["color"])

    with torch.no_grad():  # don't track tensors with autograd during prediction
        for images, tiles in tqdm(loader, desc="Eval", unit="batch", ascii=True):
            images = images.to(device)

            try:
                outputs = nn(images)
                probs = torch.nn.functional.softmax(outputs, dim=1).data.cpu().numpy()
            except Exception:
                log.log("WARNING: Skipping batch:")
                for tile in tiles:
                    log.log(" - {}".format(str(tile)))
                continue

            for tile, prob in zip(tiles, probs):
                try:
                    x, y, z = list(map(int, tile))
                    mask = np.around(prob[1:, :, :]).astype(np.uint8).squeeze()
                    tile_label_to_file(args.out, mercantile.Tile(x, y, z), palette, mask)
                except Exception:
                    log.log("WARNING: Skipping tile {}".format(str(tile)))

    if not args.no_web_ui:
        template = "leaflet.html" if not args.web_ui_template else args.web_ui_template
        base_url = args.web_ui_base_url if args.web_ui_base_url else "./"
        tiles = [tile for tile, _ in tiles_from_slippy_map(args.out)]
        web_ui(args.out, base_url, tiles, tiles, "png", template)
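
# A minimal sketch of the probability-to-mask step above: for a two-class
# softmax output of shape (N, C, H, W), rounding the foreground plane
# thresholds it at 0.5. The random tensor stands in for a real model output.
import numpy as np
import torch

outputs = torch.randn(1, 2, 4, 4)  # fake batch: N=1, C=2 classes, 4x4 tile
probs = torch.nn.functional.softmax(outputs, dim=1).data.cpu().numpy()

prob = probs[0]  # (C, H, W) for one tile
mask = np.around(prob[1:, :, :]).astype(np.uint8).squeeze()  # (H, W), 0/1 values
print(mask)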