from requests_html import HTMLSession

session = HTMLSession()
r = session.get('https://www.google.com/')

# Print every hyperlink found on the page, as a set of href values.
print(r.html.links)
from requests_html import HTMLSession

session = HTMLSession()
r = session.get('https://www.wikipedia.org/')

# render() executes the page's JavaScript in a headless Chromium browser
# (downloaded automatically on first use) before the text is extracted.
r.html.render()
print(r.html.text)
from requests_html import HTMLSession

session = HTMLSession()
r = session.get('https://www.python.org/')

# Use a CSS selector to grab only the links inside the documentation widget.
links = r.html.find('.documentation-widget a')
for link in links:
    print(link.text)

Description: In this final example, we again use an HTMLSession object to scrape the Python home page, but this time we use a CSS selector to find specific elements. We look for all the links inside the page's documentation-widget section and print their text, demonstrating how requests_html supports targeted, selector-based parsing. Overall, these examples show the versatility of the requests_html library, which is built on top of the requests library.