Boston Housing Prices Prediction


DATA ANALYSIS - BOSTON HOUSING PRICES

Task 1. Loading the data

The first step before proceeding is to load the data, which is stored in the file housing.csv.

To do this, we are going to use the Pandas library.

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Whitespace-separated file with no header row; column names taken from the dataset description
data = pd.read_csv("housing.csv", sep=r"\s+",
                   names=["crim", "zn", "indus", "chas", "nox", "rm", "age", "dis",
                          "rad", "tax", "ptratio", "black", "lstat", "medv"])
data
        crim    zn  indus  chas    nox     rm   age     dis  rad    tax  ptratio   black  lstat  medv
0    0.00632  18.0   2.31     0  0.538  6.575  65.2  4.0900    1  296.0     15.3  396.90   4.98  24.0
1    0.02731   0.0   7.07     0  0.469  6.421  78.9  4.9671    2  242.0     17.8  396.90   9.14  21.6
2    0.02729   0.0   7.07     0  0.469  7.185  61.1  4.9671    2  242.0     17.8  392.83   4.03  34.7
3    0.03237   0.0   2.18     0  0.458  6.998  45.8  6.0622    3  222.0     18.7  394.63   2.94  33.4
4    0.06905   0.0   2.18     0  0.458  7.147  54.2  6.0622    3  222.0     18.7  396.90   5.33  36.2
..       ...   ...    ...   ...    ...    ...   ...     ...  ...    ...      ...     ...    ...   ...
501  0.06263   0.0  11.93     0  0.573  6.593  69.1  2.4786    1  273.0     21.0  391.99   9.67  22.4
502  0.04527   0.0  11.93     0  0.573  6.120  76.7  2.2875    1  273.0     21.0  396.90   9.08  20.6
503  0.06076   0.0  11.93     0  0.573  6.976  91.0  2.1675    1  273.0     21.0  396.90   5.64  23.9
504  0.10959   0.0  11.93     0  0.573  6.794  89.3  2.3889    1  273.0     21.0  393.45   6.48  22.0
505  0.04741   0.0  11.93     0  0.573  6.030  80.8  2.5050    1  273.0     21.0  396.90   7.88  11.9

506 rows × 14 columns

Task 2. Exploratory analysis

# DATA DIMENSIONS

print("Dimensions of the data:",data.shape)
print(data.shape[0], "instances.")
print(data.shape[1], "attributes.")

Dimensions of the data: (506, 14)
506 instances.
14 attributes.
data.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 506 entries, 0 to 505
Data columns (total 14 columns):
 #   Column   Non-Null Count  Dtype  
---  ------   --------------  -----  
 0   crim     506 non-null    float64
 1   zn       506 non-null    float64
 2   indus    506 non-null    float64
 3   chas     506 non-null    int64  
 4   nox      506 non-null    float64
 5   rm       506 non-null    float64
 6   age      506 non-null    float64
 7   dis      506 non-null    float64
 8   rad      506 non-null    int64  
 9   tax      506 non-null    float64
 10  ptratio  506 non-null    float64
 11  black    506 non-null    float64
 12  lstat    506 non-null    float64
 13  medv     506 non-null    float64
dtypes: float64(12), int64(2)
memory usage: 55.5 KB

Impressions: there are no missing values in the attributes: there are 506 instances and each attribute contains 506 non-null values. All columns are numeric (including the target, as expected for a regression problem).
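
This can be double-checked directly (a quick sketch using the same data frame):

# Count of missing values per column; all zeros matches the info() summary above
print(data.isnull().sum())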

data.describe()
             crim          zn       indus        chas         nox          rm         age         dis         rad         tax     ptratio       black       lstat        medv
count  506.000000  506.000000  506.000000  506.000000  506.000000  506.000000  506.000000  506.000000  506.000000  506.000000  506.000000  506.000000  506.000000  506.000000
mean     3.613524   11.363636   11.136779    0.069170    0.554695    6.284634   68.574901    3.795043    9.549407  408.237154   18.455534  356.674032   12.653063   22.532806
std      8.601545   23.322453    6.860353    0.253994    0.115878    0.702617   28.148861    2.105710    8.707259  168.537116    2.164946   91.294864    7.141062    9.197104
min      0.006320    0.000000    0.460000    0.000000    0.385000    3.561000    2.900000    1.129600    1.000000  187.000000   12.600000    0.320000    1.730000    5.000000
25%      0.082045    0.000000    5.190000    0.000000    0.449000    5.885500   45.025000    2.100175    4.000000  279.000000   17.400000  375.377500    6.950000   17.025000
50%      0.256510    0.000000    9.690000    0.000000    0.538000    6.208500   77.500000    3.207450    5.000000  330.000000   19.050000  391.440000   11.360000   21.200000
75%      3.677082   12.500000   18.100000    0.000000    0.624000    6.623500   94.075000    5.188425   24.000000  666.000000   20.200000  396.225000   16.955000   25.000000
max     88.976200  100.000000   27.740000    1.000000    0.871000    8.780000  100.000000   12.126500   24.000000  711.000000   22.000000  396.900000   37.970000   50.000000
data.hist(column="medv",bins=10)

[Figure: histogram of medv, 10 bins]

Impressions: medv roughly follows a normal distribution with a slight tail to the right; the most common median home values are around $20,000.

# One box plot per attribute (showmeans=True also marks the mean) to look for outliers
for col in data.columns:
    fig, ax = plt.subplots(1, 1, figsize=(5, 1))
    sns.boxplot(x=data[col], showmeans=True)
    plt.show()

[Figures: one box plot per attribute, for all 14 columns]

Impressions: several variables appear to have outliers, such as crim, zn, dis, ptratio, lstat, rm, black and the target medv itself. It might be a good idea to remove these outliers and then fit on the filtered data.
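
One common way to do this, shown here only as a sketch (the filtered frame data_no_outliers is not used in the rest of the analysis), is an interquartile-range filter:

# Keep only rows whose values lie within 1.5 * IQR of the quartiles, for every column
q1 = data.quantile(0.25)
q3 = data.quantile(0.75)
iqr = q3 - q1
mask = ~((data < (q1 - 1.5 * iqr)) | (data > (q3 + 1.5 * iqr))).any(axis=1)
data_no_outliers = data[mask]
print("Rows kept:", data_no_outliers.shape[0], "of", data.shape[0])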

plt.figure(figsize=(20, 10))
sns.heatmap(data.corr().abs(), cmap='Greens', annot=True)

[Figure: heatmap of absolute pairwise correlations]

Impressions: Through this correlation matrix we observe several things:

  • There is a strong correlation between rad and tax variables.
  • The variables that correlate best with medv are rm and lstat (absolute correlation coefficient >= 0.7); a numeric ranking is sketched after this list.
  • The variables that correlate worst with medv are dis, black, crim, zn, age and rad. It may be interesting to remove some of these variables later, in the proposed model improvement, to see how they actually influence the model (correlation does not imply causation).
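
A small sketch to rank the absolute correlations with medv numerically:

# Absolute correlation of each attribute with the target, strongest first
print(data.corr()["medv"].abs().sort_values(ascending=False))
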
sns.set_theme(style="ticks")

sns.scatterplot(data=data,x="crim",y="medv")

[Figure: scatter plot of medv vs crim]

Impressions: when the per capita crime rate is low, median house values span the whole range, both high and low, whereas as the rate increases the median house value tends to decrease.

sns.scatterplot(data=data,x="zn",y="medv")

[Figure: scatter plot of medv vs zn]

Impressions: a possible relationship between zn and medv is observed when the proportion of residential land zoned for lots over 25,000 sq. ft. is non-zero, with slightly higher median house values when this proportion is higher.

sns.scatterplot(data=data,x="indus",y="medv")

[Figure: scatter plot of medv vs indus]

Impressions: the relationship between indus and medv is not very clear, but a slightly decreasing trend can be seen. When the proportion of non-retail business acres in the town is higher the median house value tends to be lower, while when the proportion is low the median value is somewhat higher.

sns.scatterplot(data=data,x="nox",y="medv")

[Figure: scatter plot of medv vs nox]

Impressions: It appears that when the concentration of nitrogen oxides is higher the median dwelling value is lower, while lower concentration values have higher median dwelling values.

sns.scatterplot(data=data,x="rm",y="medv")

[Figure: scatter plot of medv vs rm]

Impressions: There is a strong relationship between the average number of rooms per dwelling and the average value of the dwelling, which is almost a linear relationship (correlation coefficient of 0.7). It seems that the higher the average number of rooms, the higher the average value of the dwelling.

sns.scatterplot(data=data,x="age",y="medv")

[Figure: scatter plot of medv vs age]

Impressions: where the proportion of owner-occupied units built prior to 1940 is high there appear to be more low median house values (although high values also occur), and as this proportion decreases the median house value tends to increase.

sns.scatterplot(data=data,x="dis",y="medv")

[Figure: scatter plot of medv vs dis]

Impressions: when the weighted distance to the five Boston employment centres is small, the median house value tends to be lower overall (although a few high values also occur); as the distance increases, the median house value increases.

sns.scatterplot(data=data,x="rad",y="medv")

[Figure: scatter plot of medv vs rad]

Impressions: it appears that when the index of accessibility to radial highways is low, median house values are higher than when the index is high.

sns.scatterplot(data=data,x="tax",y="medv")

[Figure: scatter plot of medv vs tax]

Impressions: it appears that when the full-value property-tax rate per $10,000 is lower, median house values are higher, whereas when the tax rate is high, lower house values are found.

sns.scatterplot(data=data,x="ptratio",y="medv")

[Figure: scatter plot of medv vs ptratio]

Impressions: it seems that when the pupil-teacher ratio in the city is higher there are lower average housing values, while if the ratio is low, higher average housing values are found.

sns.scatterplot(data=data,x="black",y="medv")

[Figure: scatter plot of medv vs black]

Impressions: no clear linear relationship is observed; higher values of black do seem to coincide with somewhat higher median house values.

sns.scatterplot(data=data,x="lstat",y="medv")

[Figure: scatter plot of medv vs lstat]

Impressions: There is a good linear relationship between the percentage of the population that is lower class and the median house value. It seems that the higher the percentage of the population that is lower class, the lower the median house value.

Task 3. Linear regression model

from sklearn.linear_model import LinearRegression
from math import sqrt

X=data[["crim", "zn", "indus","chas","nox","rm","age","dis","rad","tax","ptratio","black","lstat"]]
y=data["medv"]

reg = LinearRegression().fit(X, y)
y_pred=reg.predict(X)
print(reg.coef_)
print(reg.intercept_)
[-1.08011358e-01  4.64204584e-02  2.05586264e-02  2.68673382e+00
 -1.77666112e+01  3.80986521e+00  6.92224640e-04 -1.47556685e+00
  3.06049479e-01 -1.23345939e-02 -9.52747232e-01  9.31168327e-03
 -5.24758378e-01]
36.4594883850902
# R2
from sklearn.metrics import r2_score
r2_score(y,y_pred)
0.7406426641094095
# RMSE

from sklearn.metrics import mean_squared_error

sqrt(mean_squared_error(y, y_pred))
4.679191295697281

Analysis: an acceptable fit is obtained on the training data, with an R² of about 0.74: not very close to the optimal value of 1, but far better than a random predictor.

The root mean squared error of about 4.68 (roughly $4,700, since medv is expressed in thousands of dollars) is slightly high, consistent with that R².
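
As a sanity check (a small sketch reusing the quantities computed above), R² and RMSE are two views of the same residuals, linked through the variance of the target:

import numpy as np

mse = mean_squared_error(y, y_pred)
var_y = np.var(y)  # population variance, matching r2_score's total sum of squares
print("1 - MSE/Var(y):", 1 - mse / var_y)  # ~0.7406, the same value as r2_score above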

Task 4. Improving the linear regression model

Regularization methods

# L1 Lasso

from sklearn import linear_model
reg_lasso = linear_model.Lasso(alpha=1.0)

reg_lasso.fit(X,y)

y_pred_lasso=reg_lasso.predict(X)

reg_lasso.score(X,y)

print("R2:",r2_score(y,y_pred_lasso))

print("RMSE:",sqrt(mean_squared_error(y, y_pred_lasso)))

R2: 0.6825842212709925
RMSE: 5.176494871751199
# L2 Ridge
reg_ridge=linear_model.Ridge(alpha=1.0)

reg_ridge.fit(X,y)

y_pred_ridge=reg_ridge.predict(X)

print("R2:",reg_ridge.score(X,y))

print("RMSE:",sqrt(mean_squared_error(y, y_pred_ridge)))
R2: 0.7388703133867616
RMSE: 4.695151993608747
# L1 and L2 : Elastic net

reg_ElasticNet=linear_model.ElasticNet(alpha=1.0)

reg_ElasticNet.fit(X,y)

y_pred_ElasticNet=reg_ElasticNet.predict(X)

print("R2:",reg_ElasticNet.score(X,y))

print("RMSE:",sqrt(mean_squared_error(y, y_pred_ElasticNet)))
R2: 0.6861018474345025
RMSE: 5.147731803213881

Analysis: no regularisation method improves the fit on the training data, which makes sense: regularisation constrains the model so that it generalises better, and that constraint can only increase the training error.
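
To see what the L1 penalty actually did, one can inspect the fitted coefficients (a small sketch reusing reg_lasso from above; with alpha=1.0 on unscaled features, some coefficients are typically driven to exactly zero):

# Lasso coefficients per attribute; zeros mean the attribute was effectively dropped
for name, coef in zip(X.columns, reg_lasso.coef_):
    print(f"{name:8s} {coef: .4f}")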


Reduction of features

As mentioned in the exploratory analysis, following the observations made on the correlation matrix, we will try removing the variables that do not show a high correlation with the variable to be predicted.

These are:

  • Crim
  • Zn
  • Chas
  • Age
  • Dis
  • Rad
  • Black
#Dropping variables: crim

X_delete=data[["zn", "indus", "rad","chas", "age","nox","rm","dis","tax","ptratio","black","lstat"]]
y=data["medv"]

reg_delete = LinearRegression().fit(X_delete, y)
y_pred_delete=reg_delete.predict(X_delete)

print("R2:",r2_score(y,y_pred_delete))

print("RMSE:",sqrt(mean_squared_error(y, y_pred_delete)))
R2: 0.7349488253339125
RMSE: 4.730275100243263
#Dropping variables: zn

X_delete=data[["crim", "indus", "rad","chas", "age","nox","rm","dis","tax","ptratio","black","lstat"]]
y=data["medv"]

reg_delete = LinearRegression().fit(X_delete, y)
y_pred_delete=reg_delete.predict(X_delete)

print("R2:",r2_score(y,y_pred_delete))

print("RMSE:",sqrt(mean_squared_error(y, y_pred_delete)))
R2: 0.7346146839915815
RMSE: 4.733255812629868
#Dropping variables: chas

X_delete=data[["crim", "indus", "rad","zn", "age","nox","rm","dis","tax","ptratio","black","lstat"]]
y=data["medv"]

reg_delete = LinearRegression().fit(X_delete, y)
y_pred_delete=reg_delete.predict(X_delete)

print("R2:",r2_score(y,y_pred_delete))

print("RMSE:",sqrt(mean_squared_error(y, y_pred_delete)))
R2: 0.7355165089722999
RMSE: 4.725206759835133
#Dropping variables: age

X_delete=data[["crim", "indus", "rad","chas", "zn","nox","rm","dis","tax","ptratio","black","lstat"]]
y=data["medv"]

reg_delete = LinearRegression().fit(X_delete, y)
y_pred_delete=reg_delete.predict(X_delete)

print("R2:",r2_score(y,y_pred_delete))

print("RMSE:",sqrt(mean_squared_error(y, y_pred_delete)))
R2: 0.7406412165505145
RMSE: 4.679204353734578
#Dropping variables: dis

X_delete=data[["crim", "indus", "rad","chas", "age","nox","rm","zn","tax","ptratio","black","lstat"]]
y=data["medv"]

reg_delete = LinearRegression().fit(X_delete, y)
y_pred_delete=reg_delete.predict(X_delete)

print("R2:",r2_score(y,y_pred_delete))

print("RMSE:",sqrt(mean_squared_error(y, y_pred_delete)))
R2: 0.7117915551680203
RMSE: 4.932588467850775
#Dropping variables: rad

X_delete=data[["crim", "indus", "zn","chas", "age","nox","rm","dis","tax","ptratio","black","lstat"]]
y=data["medv"]

reg_delete = LinearRegression().fit(X_delete, y)
y_pred_delete=reg_delete.predict(X_delete)

print("R2:",r2_score(y,y_pred_delete))

print("RMSE:",sqrt(mean_squared_error(y, y_pred_delete)))
R2: 0.7294255414274199
RMSE: 4.779307031347956
#Dropping variables: black

X_delete=data[["crim", "indus", "rad","chas", "age","nox","rm","dis","tax","ptratio","zn","lstat"]]
y=data["medv"]

reg_delete = LinearRegression().fit(X_delete, y)
y_pred_delete=reg_delete.predict(X_delete)

print("R2:",r2_score(y,y_pred_delete))

print("RMSE:",sqrt(mean_squared_error(y, y_pred_delete)))
R2: 0.7343070437613076
RMSE: 4.735998462783738
#Dropping variables: crim zn chas age dis rad black

X_delete=data[["rm","lstat","ptratio","nox","black"]]
y=data["medv"]

reg_delete = LinearRegression().fit(X_delete, y)
y_pred_delete=reg_delete.predict(X_delete)

print("R2:",r2_score(y,y_pred_delete))

print("RMSE:",sqrt(mean_squared_error(y, y_pred_delete)))
R2: 0.687763171482284
RMSE: 5.1340913976159746

Analysis: after refitting with the reduced variable sets, no reduction improves the fit on the training data. The variable whose removal degrades the fit the least is age.
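
The same experiment can be written more compactly as a loop (a sketch that should reproduce the numbers above, since it fits the same models):

# Drop one low-correlation column at a time, refit, and report the same metrics
for col in ["crim", "zn", "chas", "age", "dis", "rad", "black"]:
    X_drop = data.drop(columns=[col, "medv"])
    model = LinearRegression().fit(X_drop, y)
    pred = model.predict(X_drop)
    print(f"without {col:5s}  R2={r2_score(y, pred):.4f}  RMSE={sqrt(mean_squared_error(y, pred)):.4f}")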

Task 5. Generalisation of the linear regression model

import numpy as np

data_shuffle = data.sample(frac=1,random_state=1).reset_index(drop=True)

percentage_split=int(0.7*data_shuffle.shape[0])

train=data_shuffle[:percentage_split]
test=data_shuffle[percentage_split:]

X_train=train[["crim", "zn", "indus","chas","nox","rm","age","dis","rad","tax","ptratio","black","lstat"]]
y_train=train["medv"]

X_test=test[["crim", "zn", "indus","chas","nox","rm","age","dis","rad","tax","ptratio","black","lstat"]]
y_test=test["medv"]

print("Size train data:",len(train),"instances.")

print("Size test data:",len(test),"instances.")

reg_general = LinearRegression().fit(X_train, y_train)

y_pred_general=reg_general.predict(X_test)

print("R2:",r2_score(y_test,y_pred_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_general)))

Size train data: 354 instances.
Size test data: 152 instances.
R2: 0.6776486248371075
RMSE: 5.4268766648382245

Analysis: evaluating the linear regression model on held-out data, without regularisation or variable elimination, yields a larger error than the training-data results of Task 3, as expected.
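
For reference, scikit-learn provides a helper that performs an equivalent split in one call (a sketch; the rows it selects differ from the manual shuffle above, so the metrics will not match exactly):

from sklearn.model_selection import train_test_split

X_all = data.drop(columns=["medv"])
y_all = data["medv"]

# 70/30 split with a fixed seed for reproducibility
X_tr, X_te, y_tr, y_te = train_test_split(X_all, y_all, test_size=0.3, random_state=1)

model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", r2_score(y_te, pred))
print("RMSE:", sqrt(mean_squared_error(y_te, pred)))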

Regularisation methods

# L1 Lasso

reg_lasso_general = linear_model.Lasso(alpha=0.005)

reg_lasso_general.fit(X_train,y_train)

y_pred_lasso_general=reg_lasso_general.predict(X_test)

print("R2:",r2_score(y_test,y_pred_lasso_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_lasso_general)))

R2: 0.6781960540319253
RMSE: 5.422266644037676
# L2 
reg_ridge_general=linear_model.Ridge(alpha=0.3)

reg_ridge_general.fit(X_train,y_train)

y_pred_ridge_general=reg_ridge_general.predict(X_test)

print("R2:",r2_score(y_test,y_pred_ridge_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_ridge_general)))
R2: 0.6777253734106449
RMSE: 5.426230584397384
# L1 and L2 : Elastic net

reg_ElasticNet_general=linear_model.ElasticNet(alpha=1.0)

reg_ElasticNet_general.fit(X_train,y_train)

y_pred_ElasticNet_general=reg_ElasticNet_general.predict(X_test)

print("R2:",r2_score(y_test,y_pred_ElasticNet_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_ElasticNet_general)))
R2: 0.6607401850484715
RMSE: 5.567386838197249

Analysis: after applying the different regularisation methods there is no significant improvement in prediction; only Lasso gives a very slight improvement over the plain linear model. Note the contrast with the training data in Task 4: on test data regularisation can actually help a little, whereas on training data it could only increase the error.
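
The alpha values above were fixed by hand and the attributes have very different scales, so a natural variation (a sketch, not part of the original experiments) is to standardise the features and let cross-validation choose alpha:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import RidgeCV, LassoCV

for name, model in [("ridge", make_pipeline(StandardScaler(), RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0]))),
                    ("lasso", make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=1)))]:
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name, "R2:", r2_score(y_test, pred), "RMSE:", sqrt(mean_squared_error(y_test, pred)))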

Reduction of variables

As mentioned in the exploratory analysis, following the observations made on the correlation matrix, we will try removing the variables that do not show a high correlation with the variable to be predicted.

These are:

  • Crim
  • Zn
  • Chas
  • Age
  • Dis
  • Rad
  • Black
#Dropping variables: crim

X_delete_train=train[["zn", "indus", "rad","chas", "age","nox","rm","dis","tax","ptratio","black","lstat"]]
X_delete_test=test[["zn", "indus","rad","chas","age","nox","rm","dis","tax","ptratio","black","lstat"]]

reg_delete_general = LinearRegression().fit(X_delete_train, y_train)
y_pred_delete_general=reg_delete_general.predict(X_delete_test)

print("R2:",r2_score(y_test,y_pred_delete_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_delete_general)))
R2: 0.6759286063605116
RMSE: 5.441335901433361
#Dropping variables: zn

X_delete_train=train[["crim", "indus", "rad","chas", "age","nox","rm","dis","tax","ptratio","black","lstat"]]
X_delete_test=test[["crim", "indus","rad","chas","age","nox","rm","dis","tax","ptratio","black","lstat"]]

reg_delete_general = LinearRegression().fit(X_delete_train, y_train)
y_pred_delete_general=reg_delete_general.predict(X_delete_test)

print("R2:",r2_score(y_test,y_pred_delete_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_delete_general)))
R2: 0.6673861423192364
RMSE: 5.512585743374873
#Dropping variables: chas

X_delete_train=train[["crim", "indus", "rad","zn", "age","nox","rm","dis","tax","ptratio","black","lstat"]]
X_delete_test=test[["crim", "indus","rad","zn","age","nox","rm","dis","tax","ptratio","black","lstat"]]

reg_delete_general = LinearRegression().fit(X_delete_train, y_train)
y_pred_delete_general=reg_delete_general.predict(X_delete_test)

print("R2:",r2_score(y_test,y_pred_delete_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_delete_general)))
R2: 0.6753815942634213
RMSE: 5.445926281290249
#Dropping variables: age

X_delete_train=train[["zn", "indus", "rad","chas", "crim","nox","rm","dis","tax","ptratio","black","lstat"]]
X_delete_test=test[["zn", "indus","rad","chas","crim","nox","rm","dis","tax","ptratio","black","lstat"]]

reg_delete_general = LinearRegression().fit(X_delete_train, y_train)
y_pred_delete_general=reg_delete_general.predict(X_delete_test)

print("R2:",r2_score(y_test,y_pred_delete_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_delete_general)))

# L2 Ridge, fitted on the same attribute set without age (second pair of results below)
reg_ridge_general_age=linear_model.Ridge(alpha=0.3)

reg_ridge_general_age.fit(X_delete_train,y_train)

y_pred_ridge_general_age=reg_ridge_general_age.predict(X_delete_test)

print("R2:",r2_score(y_test,y_pred_ridge_general_age))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_ridge_general_age)))
R2: 0.6810237811595643
RMSE: 5.398391048362369
R2: 0.6825063159972143
RMSE: 5.385831140417877
#Dropping variables: dis

X_delete_train=train[["crim", "indus", "rad","zn", "age","nox","rm","chas","tax","ptratio","black","lstat"]]
X_delete_test=test[["crim", "indus","rad","zn","age","nox","rm","chas","tax","ptratio","black","lstat"]]

reg_delete_general = LinearRegression().fit(X_delete_train, y_train)
y_pred_delete_general=reg_delete_general.predict(X_delete_test)

print("R2:",r2_score(y_test,y_pred_delete_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_delete_general)))
R2: 0.6566467185102627
RMSE: 5.6008738256069535
#Dropping variables: rad

X_delete_train=train[["crim", "indus", "dis","zn", "age","nox","rm","chas","tax","ptratio","black","lstat"]]
X_delete_test=test[["crim", "indus","dis","zn","age","nox","rm","chas","tax","ptratio","black","lstat"]]

reg_delete_general = LinearRegression().fit(X_delete_train, y_train)
y_pred_delete_general=reg_delete_general.predict(X_delete_test)

print("R2:",r2_score(y_test,y_pred_delete_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_delete_general)))
R2: 0.665046948073654
RMSE: 5.531936134290245
#Dropping variables: black

X_delete_train=train[["crim", "indus", "dis","zn", "age","nox","rm","chas","tax","ptratio","rad","lstat"]]
X_delete_test=test[["crim", "indus","dis","zn","age","nox","rm","chas","tax","ptratio","rad","lstat"]]

reg_delete_general = LinearRegression().fit(X_delete_train, y_train)
y_pred_delete_general=reg_delete_general.predict(X_delete_test)

print("R2:",r2_score(y_test,y_pred_delete_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_delete_general)))
R2: 0.6743000187926087
RMSE: 5.454991204989396
#Dropping variables: crim zn chas age dis rad black

X_delete_train=train[["rm","lstat","ptratio","nox","black"]]
X_delete_test=test[["rm","lstat","ptratio","nox","black"]]

reg_delete_general = LinearRegression().fit(X_delete_train, y_train)
y_pred_delete_general=reg_delete_general.predict(X_delete_test)

print("R2:",r2_score(y_test,y_pred_delete_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_delete_general)))
R2: 0.6449377831448135
RMSE: 5.6955729415015055

Analysis: when generalising with the reduced variable sets, the variable that appears to add the most noise to the model is age: removing it yields a perceptible, although very small, improvement. Applying Ridge regularisation to the model without this variable gives a further very small improvement in generalisation.
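
A single 70/30 split is a fairly noisy estimate of generalisation, so these very small differences should be interpreted with caution; cross-validation (a sketch, assuming a scikit-learn version that includes the neg_root_mean_squared_error scorer) gives a more stable comparison:

from sklearn.model_selection import cross_val_score

for label, X_cv in [("all attributes", data.drop(columns=["medv"])),
                    ("without age", data.drop(columns=["age", "medv"]))]:
    scores = cross_val_score(LinearRegression(), X_cv, data["medv"],
                             cv=5, scoring="neg_root_mean_squared_error")
    print(f"{label}: mean RMSE over 5 folds = {-scores.mean():.3f}")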

Extra task

Three regression models provided by Sklearn will be tested. Specifically:

  • Support Vector Regression (SVR).
  • Decision Tree Regressor
  • Nearest Neighbors Regression
# SUPPORT VECTOR MACHINES - REGRESSION

from sklearn import svm

reg = svm.SVR()
reg.fit(X_train, y_train)

y_pred_general=reg.predict(X_test)


print("R2:",r2_score(y_test,y_pred_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_general)))
R2: 0.14911500540686573
RMSE: 8.816995526071912

Analysis: the generalised prediction using SVR gives very poor results, with an R² close to 0 and a larger error than any of the linear models above. With its default RBF kernel, SVR is very sensitive to the scale of the input features, and the attributes here have very different ranges, so the model struggles to find a useful fit without preprocessing.
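
A quick way to test the scaling hypothesis (a sketch; its results are not reported here, but standardising usually improves SVR considerably on data like this) is to put a scaler in front of the same SVR:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Same SVR with default hyperparameters, but on standardised inputs
svr_scaled = make_pipeline(StandardScaler(), svm.SVR())
svr_scaled.fit(X_train, y_train)
pred = svr_scaled.predict(X_test)
print("R2:", r2_score(y_test, pred))
print("RMSE:", sqrt(mean_squared_error(y_test, pred)))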

# REGRESSION TREES

from sklearn import tree

reg = tree.DecisionTreeRegressor()

reg.fit(X_train, y_train)

y_pred_general=reg.predict(X_test)


print("R2:",r2_score(y_test,y_pred_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_general)))


R2: 0.7478565977519767
RMSE: 4.799643627121541

Analysis: generalised prediction with a regression tree gives fairly good results, even improving on the linear regression model. Its ability to capture non-linear relationships between the attributes and the target explains this performance.
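
Note that DecisionTreeRegressor grows a full-depth tree by default, so it memorises the training set and the test score can vary slightly between runs; a pruned version (a sketch with an arbitrary, untuned max_depth) gives a smaller and more stable model:

# Shallower tree with a fixed seed; max_depth=5 is an arbitrary choice, not tuned here
reg_depth = tree.DecisionTreeRegressor(max_depth=5, random_state=0)
reg_depth.fit(X_train, y_train)
pred = reg_depth.predict(X_test)
print("R2:", r2_score(y_test, pred))
print("RMSE:", sqrt(mean_squared_error(y_test, pred)))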

# NEAREST NEIGHBOURS REGRESSION

from sklearn.neighbors import KNeighborsRegressor

reg = KNeighborsRegressor(n_neighbors=4)

reg.fit(X_train, y_train)

y_pred_general=reg.predict(X_test)


print("R2:",r2_score(y_test,y_pred_general))

print("RMSE:",sqrt(mean_squared_error(y_test, y_pred_general)))


R2: 0.47758482578165273
RMSE: 6.90864822018484

Analysis: generalised prediction using k-nearest neighbours does not give good results, producing a larger error than the linear model. Distance-based methods depend heavily on the scale of the features, and the attributes here have very different ranges and are not standardised.
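
Since the poor result is attributed to the lack of standardisation, a natural follow-up (a sketch; k values chosen arbitrarily and results not reported here) is to scale the features before computing the distances:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Standardise the attributes so no single one dominates the distance, then try a few k
for k in (3, 4, 5, 10):
    knn = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=k))
    knn.fit(X_train, y_train)
    pred = knn.predict(X_test)
    print(f"k={k}  R2={r2_score(y_test, pred):.3f}  RMSE={sqrt(mean_squared_error(y_test, pred)):.3f}")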