# # **DeepRacer for Dummies/ARCC local training**
#
# Those two setups come with a container that runs Jupyter Notebook (as you noticed if you're using one of them and reading this text). Logs are stored in `/logs/` and you just need to point at the latest file to see the current training. The logs are split for long-running training if they exceed 500 MB. The log loading method has been extended to support that.
#
# # **Chris Rhodes' repo**
#
# Chris' repo doesn't come with log storage out of the box. I would normally run `docker logs dr > /path/to/logfile` and then load the file.
#
# Below I have prepared a section for each case. In each case you can analyse the logs while the training is running; just note that in the case of the Console you may need to force downloading of the logs, as the `cw.download_log` method has a protection against needless downloads.
#
# Select your preferred way to get the logs below and you can get rid of the rest.

# +
# AWS DeepRacer Console
stream_name = 'sim-sample'  ## CHANGE This to your simulation application ID
fname = 'logs/deepracer-%s.log' % stream_name  # The log will be downloaded into the specified path
cw.download_log(fname, stream_prefix=stream_name)  # add force=True if you downloaded the file before but want to repeat

# DeepRacer for Dummies / ARCC repository - comment the above and uncomment
# the lines below. They rely on a magic command to list log files
# ordered by time and pick up the most recent one (index zero).
# If you want an earlier file, change 0 to a larger value.
# # !ls -t /workspace/venv/logs/*.log
# fname = !ls -t /workspace/venv/logs/*.log
# fname = fname[0]

# Chris Rhodes' repository
# Use a preferred way of saving the logs to a file, then set an fname value to load it
# fname = '/path/to/your/log/file'
# -
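# If you prefer not to rely on the `!ls -t` magic, below is a minimal plain-Python sketch that picks the most recent log file by modification time. It is not part of the original setups and assumes the same `/workspace/venv/logs/` location used by DeepRacer for Dummies / ARCC; adjust the pattern if your logs live elsewhere.

# +
# Hypothetical alternative to the shell magic above: pick the newest *.log file.
# Assumes logs are stored under /workspace/venv/logs/ as in the commented lines above.
import glob
import os

log_files = sorted(glob.glob('/workspace/venv/logs/*.log'),
                   key=os.path.getmtime, reverse=True)
if log_files:
    fname = log_files[0]  # change 0 to a larger index for an earlier file
# -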
# Enrich the lap dataframe with time deltas, speed, acceleration and progress rates.
lap_df.loc[:, 'time'] = lap_df['timestamp'].astype(float) - \
    lap_df['timestamp'].shift(1).astype(float)
lap_df.loc[:, 'speed'] = lap_df['distance'] / (100 * lap_df['time'])
lap_df.loc[:, 'acceleration'] = (lap_df['distance'] -
                                 lap_df['distance'].shift(1)) / lap_df['time']
lap_df.loc[:, 'progress_delta'] = lap_df['progress'].astype(float) - \
    lap_df['progress'].shift(1).astype(float)
lap_df.loc[:, 'progress_delta_per_time'] = lap_df['progress_delta'] / lap_df['time']

pu.plot_grid_world(lap_df, track, graphed_value='reward')
# -

# ## Evaluation Run Analysis
#
# Debug your evaluation runs or analyse the laps. By providing the evaluation simulation id you can fetch a single log file and use it. You can do the same for a race submission, but I recommend using the bulk solution above. If you still want to do it, make sure to add `log_group = "/aws/robomaker/leaderboard/SimulationJobs"` to the `download_log` call.

eval_sim = 'sim-sample'
eval_fname = 'logs/deepracer-eval-%s.log' % eval_sim

cw.download_log(eval_fname, stream_prefix=eval_sim)

# !head $eval_fname

eval_df = slio.load_pandas(eval_fname)

eval_df.head()

# ### Grid World Analysis
# The code below visualises laps from a single log file, just like the earlier section does in bulk for many files.

eu.analyse_single_evaluation(eval_df, track)
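# As mentioned in the Evaluation Run Analysis section above, a race submission log can be fetched the same way by passing the leaderboard log group to `download_log`. Below is a minimal sketch; `submission_sim` and `submission_fname` are placeholder names for illustration, and the log group value is the one quoted above.

# +
# Fetch and analyse a race submission log instead of a regular evaluation log.
# Replace the stream name placeholder with your submission's simulation id.
submission_sim = 'sim-sample'
submission_fname = 'logs/deepracer-eval-%s.log' % submission_sim
cw.download_log(submission_fname, stream_prefix=submission_sim,
                log_group="/aws/robomaker/leaderboard/SimulationJobs")

submission_df = slio.load_pandas(submission_fname)
eu.analyse_single_evaluation(submission_df, track)
# -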