# Programming Neural Networks with Python [Part 1]

This post presents the most important steps we must follow to train and test a single-layer neural network (perceptron). The first part walks through a simple example that trains a logical "or" gate, while the second part shows an example that uses a dataset of random values. Before starting, however, we must install the following libraries:

1. Python 2.7
2. scikit-learn 1.4 (machine learning library for Python)
3. Matplotlib (plotting library for Python)
4. NumPy (package for scientific computing in Python)
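As a preview of the "or" gate example mentioned above, here is a minimal sketch. Note that it uses the current scikit-learn API, where the classifier is imported as `Perceptron` and the iteration count is called `max_iter`; older releases (as in the full listing later in this post) used `perceptron.Perceptron` and `n_iter` instead.

```python
import numpy as np
from sklearn.linear_model import Perceptron

# Truth table of the logical "or" gate
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])  # desired outputs

# eta0 is the constant that multiplies the weight updates
clf = Perceptron(max_iter=100, eta0=0.002, random_state=0)
clf.fit(X, y)
print(clf.predict(X))
```

Since the "or" gate is linearly separable, the perceptron converges and reproduces the whole truth table.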

Installing prerequisites:

In order to install Python and the required dependencies on GNU/Linux (openSUSE), we must open the “Install/Uninstall Software” option of the YaST manager. After that, just search for “python”, “python-scikit-learn”, and “numpy”, as depicted in the following figures.
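If you prefer the command line over the YaST GUI, the same packages can be installed with zypper. The package names below are an assumption and may differ between openSUSE releases:

```shell
# Assumed openSUSE package names; verify them with "zypper search python"
sudo zypper install python python-numpy python-matplotlib python-scikit-learn
```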

```python
# author: vlarobbyk

# Load libraries
import numpy as np              # Package for scientific computing
import matplotlib.pyplot as pp  # Plotting library
import random                   # Library to generate random numbers
import math                     # Library to perform mathematical operations
from sklearn.linear_model import perceptron  # Machine learning toolkit; we load the (linear) perceptron classifier

# Class definition (do not forget the indentation, it is fundamental in Python)
class RedPerceptron:
    def __init__(self, total):
        # Patterns that will be learned in our example; we define zero-filled matrices
        global p        # Training patterns
        global test     # Values to test the neural network
        global targets  # Vector of desired outputs
        p = np.zeros((total, 2))
        targets = np.zeros(len(p))
        test = np.zeros((total, 2))

    # This method randomly generates the patterns that the neural network will learn
    def generar_data(self):  # Half of the points will be labeled with target 1
        global targets  # We assign targets below, so it must be declared global here too
        # Filling the matrices with random values (row by row)
        for i in range(0, len(p)):
            if i < len(p) / 2:
                # Points on a semicircle of radius 5 centered at x = 5
                dx = 5 + random.uniform(-1, 1) * 3
                dy = math.sqrt(25.0 - math.pow(dx - 5, 2))
                print "dx: ", dx, "\n"
                p[i] = [dx, dy]
                tx = 5 + random.uniform(-1, 1) * 3
                test[i] = [tx, math.sqrt(25.0 - math.pow(tx - 5, 2))]
            else:
                # Points on a semicircle of radius 7 centered at x = 3
                dx = 3 + random.uniform(-1, 1) * 3
                dy = math.sqrt(49.0 - math.pow(dx - 3, 2))
                p[i] = [dx, dy]
                tx = 3 + random.uniform(-1, 1) * 3
                test[i] = [tx, math.sqrt(49.0 - math.pow(tx - 3, 2))]

        targets = np.array([1 if i >= (len(p) / 2) else 0 for i in range(0, len(p))])

        print p, "\n"
        print targets, "\n"
        # Draw the generated points and save them to a figure called "points.png"
        mapacolores = np.array(['r', 'b', 'g'])
        pp.scatter(p[:, 0], p[:, 1], s=50, c=mapacolores[targets])
        pp.savefig('./points.png')
        pp.close()

    def entrenar_red(self):
        print "Training...\n"
        print "Test: ", test
        # Creating the perceptron with the following parameters:
        # 10000 iterations, verbose deactivated,
        # random_state (seed used when shuffling the data),
        # fit_intercept => centers the data (subtracts the mean of the values),
        # eta0 => constant by which the updates are multiplied (similar to a learning rate alpha)
        nnetwork = perceptron.Perceptron(n_iter=10000, verbose=0, random_state=None,
                                         fit_intercept=True, eta0=0.002)
        # Train the network
        nnetwork.fit(p, targets)
        # Print results
        print "Desired outputs: " + str(targets)
        print "Prediction on the training corpus: " + str(nnetwork.predict(p))
        print "Weights = [" + str(nnetwork.coef_[0, 0]) + "," + str(nnetwork.coef_[0, 1]) + "]"
        print "Theta = " + str(nnetwork.intercept_)
        print "Prediction on the test corpus: " + str(nnetwork.predict(test))
        # Drawing the results and saving them to a file
        pesos = np.array([nnetwork.coef_[0, 0], nnetwork.coef_[0, 1]])
        theta = nnetwork.intercept_
        cortex1 = -theta / pesos[0]  # intersection of the decision line with the x axis
        cortex2 = -theta / pesos[1]  # intersection of the decision line with the y axis
        mapacolores = np.array(['r', 'b', 'g'])
        pp.scatter(p[:, 0], p[:, 1], s=50, c=mapacolores[targets])
        # Exercise: add the separating line and mark the correctly and
        # incorrectly classified points
        pp.savefig('./sep.png')

if __name__ == "__main__":
    redPerceptron = RedPerceptron(10)
    redPerceptron.generar_data()
    redPerceptron.entrenar_red()
```
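The separating line that the listing leaves as an exercise can be recovered from the learned parameters: the perceptron classifies a point (x, y) by the sign of w0·x + w1·y + θ, so the decision boundary is the line w0·x + w1·y + θ = 0, i.e. y = −(w0·x + θ)/w1. The following self-contained sketch illustrates this with the modern `Perceptron` class and two hypothetical clusters standing in for the semicircles above (the file name `sep_demo.png` is also made up for this example):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as pp
from sklearn.linear_model import Perceptron

# Two linearly separable clusters (stand-ins for the semicircles above)
rng = np.random.RandomState(0)
a = rng.uniform(-1, 1, (20, 2)) + [5, 5]  # class 1, around (5, 5)
b = rng.uniform(-1, 1, (20, 2)) + [1, 1]  # class 0, around (1, 1)
X = np.vstack([a, b])
y = np.array([1] * 20 + [0] * 20)

clf = Perceptron(max_iter=1000, random_state=0).fit(X, y)
w = clf.coef_[0]
theta = clf.intercept_[0]

# Decision line: w[0]*x + w[1]*y + theta = 0  =>  y = -(w[0]*x + theta)/w[1]
xs = np.linspace(X[:, 0].min(), X[:, 0].max(), 100)
ys = -(w[0] * xs + theta) / w[1]

pp.scatter(X[:, 0], X[:, 1], s=50, c=np.array(['r', 'b'])[y])
pp.plot(xs, ys, 'g-')
pp.savefig('./sep_demo.png')
```

Because the clusters are separable, the perceptron converges and every training point falls on the correct side of the green line.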

Results: