Finally, you will implement the full Pegasos algorithm. You will be given the same feature matrix and labels array as in the Full Perceptron Algorithm. You will also be given T, the maximum number of times that you should iterate through the feature matrix before terminating the algorithm. Initialize θ and θ_0 to zero. For each update, set η = 1/√t, where t is a counter for the number of updates performed so far (between 1 and nT inclusive). This function should return a tuple in which the first element is the final value of θ and the second element is the value of θ_0.

Note: Please call get_order(feature_matrix.shape[0]) and use the resulting ordering to iterate over the feature matrix in each pass. The ordering is fixed for grading purposes; in practice, people typically just randomly shuffle the indices for stochastic optimization.

Available Functions: You have access to the NumPy Python library as np, and to pegasos_single_step_update, which you have already implemented.

1 Answer


Answer:

In[7] def pegasos(feature_matrix, labels, T, L):
          """
          Runs the full Pegasos algorithm on the given data, with
          learning rate eta = 1/sqrt(t), where t is a counter for the
          number of updates performed so far (between 1 and nT inclusive).

          Args:
              feature_matrix - A numpy matrix describing the given data.
                  Each row represents a single data point.
              labels - A numpy array where the kth element is the correct
                  classification of the kth row of the feature matrix.
              T - the maximum number of times that you should iterate
                  through the feature matrix before terminating the
                  algorithm.
              L - the lambda value used in the Pegasos update.

          Returns: A tuple in which the first element is the final value
          of theta and the second element is the value of theta_0.
          """
          (nsamples, nfeatures) = feature_matrix.shape
          theta = np.zeros(nfeatures)
          theta_0 = 0
          count = 0  # number of updates performed so far
          for t in range(T):
              for i in get_order(nsamples):
                  count += 1
                  eta = 1.0 / np.sqrt(count)  # learning rate 1/sqrt(t)
                  (theta, theta_0) = pegasos_single_step_update(
                      feature_matrix[i], labels[i], L, eta, theta, theta_0)
          return (theta, theta_0)

The expected result, written out by hand:

In[7] (np.array([1 - 1/np.sqrt(2), 1 - 1/np.sqrt(2)]), 1)
Out[7] (array([0.29289322, 0.29289322]), 1)

A quick check on a small dataset:

In[8] feature_matrix = np.array([[1, 1], [1, 1]])
      labels = np.array([1, 1])
      T = 1
      L = 1
      exp_res = (np.array([1 - 1/np.sqrt(2), 1 - 1/np.sqrt(2)]), 1)
      pegasos(feature_matrix, labels, T, L)
Out[8] (array([0.29289322, 0.29289322]), 1.0)

Step-by-step explanation:

The function first reads the number of samples and features from feature_matrix.shape, then initializes theta to a zero vector of length nfeatures, theta_0 to zero, and an update counter to zero. It makes T passes over the data, visiting the rows in the order returned by get_order(nsamples). On each visit it increments the counter, sets the learning rate eta = 1/sqrt(count) so that the step size shrinks as updates accumulate, and delegates the parameter update for that single example to pegasos_single_step_update, passing the data point, its label, the regularization parameter L, eta, and the current parameters.

In the In[8] test, n = 2 and T = 1, so exactly two updates occur. Assuming the standard Pegasos single-step update (which the observed output matches): on the first update (t = 1, eta = 1) the margin at (theta, theta_0) = (0, 0) is 0 ≤ 1, so theta becomes [1, 1] and theta_0 becomes 1. On the second update (t = 2, eta = 1/√2) the margin is 1 · ([1, 1] · [1, 1] + 1) = 3 > 1, so only the regularization shrinkage applies: theta = (1 − 1/√2) · [1, 1] ≈ [0.29289322, 0.29289322], with theta_0 unchanged at 1. This matches exp_res.
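For completeness: pegasos_single_step_update is not shown in the answer. Below is a minimal sketch of what it could look like, based on the standard Pegasos update rule, which the arithmetic above and the Out[8] result are consistent with; the course's actual implementation may differ in details such as tie-breaking at a margin of exactly 1.

import numpy as np

def pegasos_single_step_update(feature_vector, label, L, eta, theta, theta_0):
    # Sketch only: standard Pegasos update for a single example.
    # If the example is inside the margin (or misclassified), step on
    # both the hinge-loss term and the regularizer; otherwise apply
    # only the regularization shrinkage to theta.
    if label * (np.dot(theta, feature_vector) + theta_0) <= 1:
        theta = (1 - eta * L) * theta + eta * label * feature_vector
        theta_0 = theta_0 + eta * label
    else:
        theta = (1 - eta * L) * theta
    return (theta, theta_0)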

answered by Powisss (3.7k points)