Problem while implementing "Gradient Descent Algorithm" in Matlab
10 views (last 30 days)
I'm solving a programming assignment in a machine learning course, in which I have to implement the gradient descent algorithm as shown below.
I'm using the following code:
data = load('ex1data1.txt'); % text file contains 2 comma-separated values per row
y = data(:, 2);              % second column is the target variable
m = length(y);               % number of training examples
X = [ones(m, 1), data(:,1)]; % prepend a column of ones for the intercept term
theta = zeros(2, 1);
iterations = 1500;
alpha = 0.01;
function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
    k=1:m;
    j1=(1/m)*sum((theta(1)+theta(2).*X(k,2))-y(k))
    j2=((1/m)*sum((theta(1)+theta(2).*X(k,2))-y(k)))*(X(k,2))
    theta(1)=theta(1)-alpha*(j1);
    theta(2)=theta(2)-alpha*(j2);
    J_history(iter) = computeCost(X, y, theta);
end
end
theta = gradientDescent(X, y, theta, alpha, iterations);
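gradientDescent calls computeCost, which isn't shown here. In this exercise it is the usual squared-error cost; a minimal sketch, assuming the same X, y and theta shapes as above:
function J = computeCost(X, y, theta)
% Squared-error cost for linear regression, averaged over m training examples.
m = length(y);
J = sum((X*theta - y).^2) / (2*m);
end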
On running the above code I'm getting this error message
3 comments
Accepted Answer
More Answers (11)
Jayan Joshi
15 Oct 2019
Edited: Jayan Joshi, 15 Oct 2019
predictions = X*theta;                               % hypothesis for all training examples
theta = theta - (alpha/m*sum((predictions-y).*X))';  % update both parameters at once
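Note that (predictions-y).*X multiplies an m-by-1 vector element-wise with an m-by-2 matrix, which relies on implicit expansion (MATLAB R2016b or newer). An equivalent form that avoids it, as a sketch:
theta = theta - (alpha/m) * (X' * (predictions - y));   % X'*residual is the same 2-by-1 gradient
% or, on pre-R2016b releases:
theta = theta - (alpha/m * sum(bsxfun(@times, predictions - y, X)))';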
0 comments
Margo Khokhlova
19 Oct 2015
Edited: Walter Roberson, 19 Oct 2015
Well, rather late, but you just got the brackets wrong... This one works for me:
k=1:m;
j1=(1/m)*sum((theta(1)+theta(2).*X(k,2))-y(k))
j2=(1/m)*sum(((theta(1)+theta(2).*X(k,2))-y(k)).*X(k,2))
theta(1)=theta(1)-alpha*(j1);
theta(2)=theta(2)-alpha*(j2);
1 comment
Shekhar Raj
19 Sep 2019
The code below works for me:
Prediction = X * theta;
temp1 = alpha/m * sum((Prediction - y));
temp2 = alpha/m * sum((Prediction - y) .* X(:,2));
theta(1) = theta(1) - temp1;
theta(2) = theta(2) - temp2;
2 comments
Jayan Joshi
15 Oct 2019
Thank you, this really helped. I tried a more vectorized form of this and it worked.
predictions =X*theta;
theta=theta-(alpha/m*sum((predictions-y).*X))';
Lomg Ma
24 Jan 2021
How did you manage to vectorize it that much? I don't understand how to translate the formula into code; it seems confusing.
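For reference, here is a sketch of how the two per-parameter updates collapse into that single line, assuming X is the m-by-2 design matrix with a leading column of ones, y is m-by-1 and theta is 2-by-1:
residual = X*theta - y;                 % m-by-1 vector of prediction errors
g1 = (1/m) * sum(residual .* X(:,1));   % gradient for theta(1); X(:,1) is all ones
g2 = (1/m) * sum(residual .* X(:,2));   % gradient for theta(2)
% X' * residual stacks both weighted sums into one 2-by-1 vector, so the update becomes a single line:
theta = theta - (alpha/m) * (X' * residual);   % identical to theta - alpha*[g1; g2]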
Sesha Sai Anudeep Karnam
7 Aug 2019
Edited: Sesha Sai Anudeep Karnam, 7 Aug 2019
temp0 = theta(1)-alpha*((1/m)*(theta(1)+theta(2).*X(k,2)-y(k)));
temp1 = theta(2)- alpha*((1/m)*(theta(1)+theta(2).*X(k,2)-y(k)).*X(k,2));
theta(1) = temp0;
theta(2) = temp1;
% this code gives approximate values, but when submitting I'm getting 0 points for it
% Theta found by gradient descent:
% -3.588389
% 1.123667
% Expected theta values (approx)
% -3.6303
% 1.1664
% How can I overcome this?
2 comments
Shekhar Raj
19 Sep 2019
The code below gave the exact values:
for iter = 1:num_iters
    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    % theta.
    %
    % Hint: While debugging, it can be useful to print out the values
    % of the cost function (computeCost) and gradient here.
    %
    Prediction = X * theta;
    temp1 = alpha/m * sum((Prediction - y));
    temp2 = alpha/m * sum((Prediction - y) .* X(:,2));
    theta(1) = theta(1) - temp1;
    theta(2) = theta(2) - temp2;
    % ============================================================
end
Amber Hall
15 Aug 2021
I've tried this code but still get an error about not enough input arguments for m = length(y). Do you know what the cause may be, as it appears I have coded it correctly?
ICHEN WU
8 Nov 2015
Can you tell me why my answer is not correct? I feel they are the same.
theta(1)=theta(1)-(alpha/m)*sum( (X*theta)-y);
theta(2)=theta(2)-(alpha/m)*sum( ((X*theta)-y)'*X(:,2));
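One thing to check (a guess, not necessarily the cause in your run): the first line changes theta(1) before the second line evaluates X*theta, so theta(2)'s gradient is computed from a partially updated theta. Computing the residual once keeps the update simultaneous, as in the temp-variable answers further down; a sketch:
residual = (X*theta) - y;               % evaluated once, with the current (old) theta
theta(1) = theta(1) - (alpha/m) * sum(residual);
theta(2) = theta(2) - (alpha/m) * sum(residual .* X(:,2));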
5 comments
pavan B
20 Feb 2017
The above one works perfectly. Try my code below too.
Earlier I used: h = X * theta; a0 = (1/m)*sum((h-y)); a1 = (1/m)*sum((h-y)'*x1); and surprisingly it didn't work.
Working code: x1 = X(:,2); a0 = (1/m)*sum((X * theta-y)); a1 = (1/m)*sum((X * theta-y)'*x1); a = [a0;a1]; theta = theta- (alpha*a);
If anyone finds out what's wrong with my earlier code, it would be appreciated.
Leon Cai
6 Apr 2017
Yeah, I tried h = X*theta and it didn't work either. I'm thinking that when we use the variable h, as we update theta, the value of h remains unchanged.
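To illustrate that point, a minimal sketch of the failure mode being described, assuming the usual X, y, theta, alpha, m and num_iters:
% Stale hypothesis: h is computed once with the initial theta and never refreshed,
% so every iteration applies the same gradient.
h = X * theta;
for iter = 1:num_iters
    theta = theta - (alpha/m) * (X' * (h - y));
end
% Recomputing h inside the loop makes it track the current theta:
for iter = 1:num_iters
    h = X * theta;
    theta = theta - (alpha/m) * (X' * (h - y));
end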
Ali Dezfooli
17 Jun 2016
In this line
X = [ones(m, 1), data(:,1)];
you add a bias column to your X, but in the formula from your picture (Ng's slides), when you compute theta(2) you should leave the bias column out.
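In other words, each parameter's gradient is weighted by its own column of X, so the column of ones only enters the theta(1) update; a sketch of the same point:
residual = X*theta - y;
grad1 = (1/m) * sum(residual .* X(:,1));   % X(:,1) is the bias column of ones
grad2 = (1/m) * sum(residual .* X(:,2));   % only the feature column, no bias here
theta = theta - alpha * [grad1; grad2];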
0 comments
Utkarsh Anand
17 Mar 2018
Looking at the problem, I also think that you cannot initialize theta to zero.
0 comments
Rajeswari G
2 Jan 2021
error = (X * theta) - y;
theta = theta - ((alpha/m) * X'*error);
In this equation, why do we take X'?
1 comment
Bee Ling TAN
15 Aug 2021
This is because X is a 97x2 matrix. Only X' (2x97) makes the matrix product valid and produces a 2x1 vector, whose entries update theta(1) and theta(2) respectively.
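A quick dimension check of that update, using the ex1 sizes of 97 examples and 2 parameters:
error = (X * theta) - y;          % (97x2)*(2x1) - (97x1) = 97x1
grad  = X' * error;               % (2x97)*(97x1) = 2x1, the same shape as theta
theta = theta - (alpha/m) * grad;
% Using X instead of X' would fail: (97x2)*(97x1) is an inner-dimension mismatch.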
Wamin Thammanusati
21 Feb 2021
Edited: Wamin Thammanusati, 21 Feb 2021
The code below works for this case (one variable) and also for multiple variables:
for iter = 1:num_iters
    Hypothesis = X * theta;
    for i = 1:size(X,2)
        theta(i) = theta(i) - alpha/m * sum((Hypothesis-y) .* X(:,i));
    end
end
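For example, the same loop runs unchanged with more than one feature; a small usage sketch with made-up data (all names and values here are illustrative only):
X = [ones(5,1), (1:5)', [2;1;4;3;5]];   % bias column plus two features
y = [3; 4; 8; 9; 12];
m = length(y);
theta = zeros(size(X,2), 1);            % one parameter per column of X
alpha = 0.01;
num_iters = 1500;
for iter = 1:num_iters
    Hypothesis = X * theta;
    for i = 1:size(X,2)
        theta(i) = theta(i) - alpha/m * sum((Hypothesis - y) .* X(:,i));
    end
end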
1 comment
Amber Hall
15 Aug 2021
Having tried the same code, I am struggling to understand what I am doing wrong - I receive an error about not enough input arguments on the m = length(y) line. Do you have any ideas?
Chong Lu
16 Nov 2021
Edited: Walter Roberson, 27 Nov 2021
temp1 = theta(1) - alpha*(sum(X*theta - y)/m);
temp2 = theta(2) - alpha*(sum((X*theta - y).*X(:,2))/m);
theta(1) = temp1;   % assign both at the end so the update is simultaneous
theta(2) = temp2;
0 comments