```python
import pandas as pd
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Load housing prices dataset
dataset = pd.read_csv('housing.csv')

# Define features and target variable
X = dataset.drop('price', axis=1)
y = dataset['price']

# Split dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Build MLP regression model
model = MLPRegressor(hidden_layer_sizes=(100, 50), activation='relu',
                     solver='adam', max_iter=500, random_state=42)
model.fit(X_train, y_train)

# Score model on testing set
score = model.score(X_test, y_test)
r2 = r2_score(y_test, model.predict(X_test))

# Print scores
print('R2 score: %.2f' % score)
print('r2_score: %.2f' % r2)
```

In this example, `MLPRegressor` builds a neural network with two hidden layers of 100 and 50 neurons, respectively. The `relu` activation function is used for the hidden layers, and the `adam` solver is used for optimization. The `max_iter` parameter caps the number of iterations the solver runs. The model is fit on the training set and then evaluated on the testing set in two ways: with the model's `score` method and with the `r2_score` function. For a regressor, `score` returns the R² (coefficient of determination), so both printed values are the same; the example shows the two equivalent ways to obtain it. The `sklearn` package (scikit-learn) used here is a machine learning library for Python that provides a range of supervised and unsupervised learning algorithms.
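One practical caveat: neural networks are sensitive to feature scale, so standardizing the inputs before training usually helps the `adam` solver converge. A minimal sketch of the same model wrapped in a scikit-learn `Pipeline` with a `StandardScaler`; the synthetic dataset below is illustrative only, standing in for the `housing.csv` file assumed in the example above:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic regression data standing in for the housing dataset
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Pipeline: standardize features, then fit the MLP
pipe = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(100, 50), activation='relu',
                 solver='adam', max_iter=500, random_state=42))
pipe.fit(X_train, y_train)

# For a regressor, score() returns the R^2 on the test set
r2 = pipe.score(X_test, y_test)
print('R2 score: %.2f' % r2)
```

Because the scaler is inside the pipeline, it is fit only on the training split, which avoids leaking test-set statistics into training.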