CMSC122GroupProject/CMSC122_Group_Project
INSTRUCTIONS: Assuming your directory matches ours in terms of the locations of modules and files, you should do the following:

INSTALLATION:
-You'll need to install the google-maps-services-python module, because we make frequent use of the googlemaps package. We did this by git cloning from the following page: https://github.com/googlemaps/google-maps-services-python
-To create a fresh restaurants database in place of the one provided, NLTK must be installed and the WordNet corpus must be downloaded. This can be done by running the command nltk.download() and then selecting all packages.
-Since our project runs on Django, please make sure you have the most up-to-date version of it.

PATHS: It is imperative that the modules and files are in the right locations with respect to each other. In particular, asterisk_pre_algo.py, models.py, and views.py in the restaurants directory all call functions from each other, from googlemaps, and from movies.py (among others). Please make sure your directory looks like ours.

COMMAND LINE:
-In the command line of the asterisk_site directory, run "python manage.py runserver". A link will appear in the terminal; click it to begin using the site.

IMPORTANT COMMENTS:
-So that the code runs faster, we limited the SQL query to 5 restaurant results. This can easily be changed by altering the prelim-assembly function in asterisk_pre_algo.py.
-Usage of the actual Django site, including how to correctly fill out the form and navigate between pages, is detailed within the website itself.
-If you'd like to access the admin site of the interface, you'll need username: asudit, password: Anightonthetown
-The final output of "Aster*sk Experience" is a series of most efficient paths to get to the final location.
This is why you may observe a path that ends with hours to spare in the schedule (because going to an additional restaurant would not be the most efficient way to arrive at the final location by the end of your travel).

GENERAL INTUITION FOR HOW THE CODE WORKS: The key files contain descriptions of what their functions do, but we outline them here as well:
-The first portion of our code is a basic Yelp scraper. It starts on a given search page (in our case, Hyde Park restaurants) and gathers the URLs of every business. It then visits each business in turn, and if the business's Yelp page has all the necessary details (e.g., a restaurant without hours does not help us), we scrape its data directly from the HTML through requests/BeautifulSoup. The result is a dictionary, which we json.dump.
-The resulting JSON file scraped from Yelp is processed into a database by Yelp/sql_maker.py. This file incorporates the WordNet corpus of NLTK in order to create the column of the "yelp" table that contains synonyms of words found in reviews.
-Flixster/Fandango data are scraped using asterisk_site/restaurants/movies.py. The scraper is straightforward, requiring only a zip code and a day of the week.
-Once the Yelp data has been processed into a database within the asterisk_site directory, asterisk_pre_algo.py takes a sample_dictionary that contains the search parameters of the user's query and generates a SQL query. This query is then run against the database, returning a list of restaurants with descriptors like price and rating; this is more or less what the table on the Restaurant Queries page contains. The output is rendered by processing the query and calling the asterisk algorithm in the views page, which in turn uses an HTML template that dictates what the Restaurant Queries page will look like.
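The dictionary-to-SQL step can be sketched as follows. This is a minimal illustration only: the "yelp" table schema (name/price/rating columns) and the function name are assumptions, and the project's real prelim-assembly function in asterisk_pre_algo.py handles many more search parameters.

```python
import sqlite3

def build_query(params, limit=5):
    """Assemble a parameterized SQL query from a search dictionary.

    An illustrative stand-in for the prelim-assembly step in
    asterisk_pre_algo.py; the column names and the 'yelp' table
    schema here are guesses, not the project's real schema.
    """
    clauses, args = [], []
    if "price" in params:           # maximum price bracket
        clauses.append("price <= ?")
        args.append(params["price"])
    if "rating" in params:          # minimum star rating
        clauses.append("rating >= ?")
        args.append(params["rating"])
    where = " WHERE " + " AND ".join(clauses) if clauses else ""
    args.append(limit)              # capped at 5 results for speed
    return "SELECT name, price, rating FROM yelp" + where + " LIMIT ?", args

# Demo against a throwaway in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE yelp (name TEXT, price INTEGER, rating REAL)")
conn.executemany("INSERT INTO yelp VALUES (?, ?, ?)",
                 [("A", 1, 4.5), ("B", 3, 4.8), ("C", 2, 4.2), ("D", 2, 3.0)])
sql, args = build_query({"price": 2, "rating": 4.0})
results = conn.execute(sql, args).fetchall()
```

Keeping the query parameterized (the `?` placeholders) lets sqlite3 handle quoting, which matters when user-supplied form values end up in the query.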
-The PLANNER app (Aster*sk Experience) works as follows: the sample_dictionary is processed similarly, yielding a list of restaurants. This list also includes a longitude and latitude for each restaurant, since our least-cost algorithm needs them. The query object generated by the form that built the sample dictionary is included as well, since the least-cost algorithm also needs information about the user's query. The new list of restaurants and the query object are fed into the GO function from movies.py, which constructs a graph linking all the restaurants with the home. Using the home data to find nearby movie theaters, we also generate movie nodes and insert them into the graph. With a graph representing all possibilities for the night (assuming the user does not want to go from restaurant to restaurant or movie to movie), we can recursively solve for the most "efficient" path, which we define as the path that maximizes the amount of eating and movie-watching along it. The efficient paths are output as a list of tuples, which is fed back to the website. This all happens within the movies function in views.py; after processing, the same function generates a table of events in chronological order from left to right. The planner is rendered from its own HTML template in the templates sub-folder of the restaurants directory.
-The forms page has its own HTML template as well. A form within forms.py in the restaurants directory outlines what the form should look like. It is handled by the forms view function in views.py (called Dine_query_new). The form object is then, for all practical purposes, turned into a dictionary, which is read into its own HTML template within the templates module. Thus everything is connected.
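The recursive graph search described above can be sketched like this. It is a simplified model, not the project's GO function: the node names and travel costs are invented, and "efficiency" is reduced to the number of stops that fit in the remaining time, whereas the real code also works with scraped hours and showtimes.

```python
def best_path(graph, node, time_left, visited=()):
    """Recursively pick the path that fits the most stops into the
    remaining time, never hopping between two nodes of the same kind
    (no restaurant-to-restaurant or movie-to-movie legs).

    A hypothetical simplification of the GO function in movies.py.
    Each node is a (name, kind) tuple; graph maps a node to a list
    of (neighbor, travel_cost) pairs.
    """
    best = [node]
    for neighbor, cost in graph.get(node, []):
        if neighbor in visited or cost > time_left:
            continue                      # unreachable within the schedule
        if neighbor[1] == node[1]:
            continue                      # skip same-kind back-to-back legs
        candidate = [node] + best_path(graph, neighbor,
                                       time_left - cost, visited + (node,))
        if len(candidate) > len(best):    # more eating/watching wins
            best = candidate
    return best

# Tiny example: home, one restaurant, one movie theater (names invented).
home, r1, m1 = ("home", "home"), ("Medici", "restaurant"), ("Harper", "movie")
graph = {home: [(r1, 1), (m1, 1)],
         r1: [(m1, 1)],
         m1: [(r1, 1)]}
path = best_path(graph, home, 2)
```

This is also why a returned plan can end with time to spare: if no additional node fits the remaining budget (or would break the alternation rule), the shorter path is already the maximum.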
-Of course, the query object returned from the forms page is an instance of the dine_query object defined in models.py within the restaurants directory.
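For concreteness, the query-object-to-dictionary handoff might look like this in plain Python. This is a hypothetical sketch: the real dine_query is a Django model defined in models.py, and the field names and values below are invented based on the parameters mentioned above (zip code, day of the week, price).

```python
from dataclasses import dataclass, asdict

@dataclass
class DineQuery:
    """A plain-Python stand-in for the dine_query model in models.py.

    Field names are guesses; the actual object is a Django model
    instance with whatever fields the form in forms.py defines.
    """
    zip_code: str
    day: str
    max_price: int = 2

# views.py turns the form's query object, for all practical purposes,
# into a dictionary before handing it to the templates:
query = DineQuery(zip_code="60637", day="Friday")
sample_dictionary = asdict(query)
```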
About
Adam's, Ken's, and Brandon's immortalizing quest for fame