import requests
from bs4 import BeautifulSoup

url = 'https://www.example.com/'
response = requests.get(url)
content = response.content
soup = BeautifulSoup(content, 'html.parser')

title = soup.title.text
print(title)
import requests
from bs4 import BeautifulSoup

url = 'https://www.example.com/'
response = requests.get(url)
content = response.content
soup = BeautifulSoup(content, 'html.parser')

links = []
for link in soup.find_all('a'):
    links.append(link.get('href'))
print(links)

In both examples, we import the `requests` library to make an HTTP request to the webpage and retrieve its content. We then use BeautifulSoup to parse the HTML and extract the information we need. The `BeautifulSoup` class itself is imported from the `bs4` package.
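Note that `link.get('href')` returns `None` for anchor tags without an `href` attribute, and relative links like `/about` are not usable on their own. A minimal sketch of handling both, using `find_all('a', href=True)` and `urllib.parse.urljoin` on a hypothetical HTML snippet (no network request needed):

```python
from urllib.parse import urljoin
from bs4 import BeautifulSoup

# Hypothetical HTML used for illustration instead of a live page
html = """
<html><head><title>Demo</title></head>
<body>
  <a href="/about">About</a>
  <a href="https://www.example.com/contact">Contact</a>
  <a>no href attribute</a>
</body></html>
"""

base_url = 'https://www.example.com/'
soup = BeautifulSoup(html, 'html.parser')

# href=True skips anchors with no href; urljoin resolves relative paths
links = [urljoin(base_url, a['href']) for a in soup.find_all('a', href=True)]
print(links)
```

This prints only the two real links, both as absolute URLs, which is usually what you want before crawling further pages.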