```python
from multiprocessing import Pool

def square(num):
    return num * num

if __name__ == "__main__":
    myList = [1, 2, 3, 4, 5]
    # Create a pool of worker processes (one per CPU core by default)
    # and apply square() to every list item in parallel.
    pool = Pool()
    result = pool.map(square, myList)
    print(result)  # [1, 4, 9, 16, 25]
```
```python
import requests
from multiprocessing import Pool

def download(url):
    # Fetch the URL and report the HTTP status code.
    r = requests.get(url)
    return r.status_code

if __name__ == "__main__":
    urls = [
        "https://www.google.com",
        "https://www.youtube.com",
        "https://www.facebook.com",
    ]
    pool = Pool()
    result = pool.map(download, urls)
    print(result)
```

In this example, we use the `requests` library to fetch multiple URLs in parallel. The `download` function takes a URL, performs an HTTP GET request with `requests`, and returns the status code of the response. The `Pool` class creates a pool of worker processes, which execute `download` for each URL in the `urls` list. The results are collected into the `result` variable and printed. Both examples use Python's `multiprocessing` package to spawn multiple worker processes and run a function in parallel.