Field | Value
---|---
User | id 905781093427
Likes | 665
Dislikes | 66
There is a small typo in the sigmoid function (1:00).
As-is: h_hat = 1 / (1 + e^(-wx+b)). To-be: h_hat = 1 / (1 + e^(-(wx+b))). Always appreciate these great videos~ Comment by: @kstyle8546
Very nice explanation... Thanks. Comment by: @drrbalasubramanianmsu1593
I was looking for a basic logistic regression model built algorithmically. Thank you very much; I like your video. Comment by: @hoami8320
This is great, thank you!!! Comment by: @arieljiang8198
Hello, in which environment is the Python code written? Comment by: @amirhosseintalebi6770
So to evaluate test data we should not use fit_transform — is transform alone enough? Comment by: @dhivakarsomasundaram21
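That is correct for scikit-learn-style preprocessing: fit the scaler on the training data only, then reuse it unchanged on the test data, so the test set never leaks into the learned statistics. A minimal sketch with `StandardScaler` (the data here is made up):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0]])
X_test = np.array([[4.0]])

scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)  # learn mean/std from training data
X_test_s = scaler.transform(X_test)        # reuse the same mean/std; no refitting
print(X_train_s.mean())                    # → 0.0 (training data is centered)
```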
THANK YOU TO THE MOON AND BACK... THE BEST EXPLANATION I HAVE EVER SEEN. Comment by: @priyanj7010
You have explained this very clearly. Keep it going! :) You saved my ass!!! Comment by: @OK-bu2qf
I wonder how I can plot the logistic regression line (the decision boundary) calculated from that. Comment by: @yukiyoshimoto502
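For two features, the decision boundary is the line where w·x + b = 0, i.e. where the sigmoid equals 0.5. A hedged sketch, with made-up example weights in place of a fitted model's `w` and `b`:

```python
import numpy as np
import matplotlib.pyplot as plt

# assumed example parameters; substitute your fitted model's w and b
w = np.array([1.5, -2.0])
b = 0.5

# boundary: w[0]*x0 + w[1]*x1 + b = 0  ->  x1 = -(w[0]*x0 + b) / w[1]
x0 = np.linspace(-3, 3, 100)
x1 = -(w[0] * x0 + b) / w[1]

plt.plot(x0, x1, "k--", label="decision boundary (p = 0.5)")
plt.legend()
plt.show()
```

Scatter the two classes on the same axes first to see the line separating them; this only works directly when `w` has length 2.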
Where is the entropy loss implemented in this code? Comment by: @umarmughal5922
Thank you, buddy! This makes a lot of sense after my self-study of machine learning using the built-in sklearn models. Comment by: @sushilkokil2439
What about the loss function? Comment by: @sat4GD
Isn't it a single-layer neural net with a sigmoid activation function? Comment by: @robosergTV
Love your video!!! Comment by: @HuyNguyen-fp7oz
Where is the loss applied, please? Comment by: @matthewking8468
The way you relate linear regression to logistic regression makes it so clear, thank you so much! Comment by: @kougamishinya6566
Thank you very much for your video. I wonder why you are not evaluating your model at each iteration and returning the model with the lowest error, instead of returning the model with the last w and b parameters of the for loop. Comment by: @burcakotlu7858
I thank you, good sir! Comment by: @redouanebelkadi5068
TypeError: unsupported operand type(s) for *: 'int' and 'NoneType' Comment by: @jaimehumbertorinconespinos3790
Very nice! Comment by: @ahmadtarawneh2990
Thank you so much. Comment by: @unfinishedsentenc9864
Thank you!!! Your videos help me a lot :) Comment by: @marilyncancino4875
Thanks for the video; now it all makes sense what is going on behind the scenes. Comment by: @TanmayShrivastava17
I think you use the linear regression cost function, not the logistic regression cost function, in your code. Comment by: @4wanys
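Several comments above ask where the loss is. For logistic regression the standard loss is binary cross-entropy (log loss), and its gradient with respect to w and b has the same algebraic form as the squared-error gradient from linear regression, which is why the update code can look identical even when the loss itself is never computed explicitly. A hedged sketch (variable names assumed, not taken from the video):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def cross_entropy(y, y_hat, eps=1e-12):
    # binary cross-entropy (log loss), clipped for numerical safety
    y_hat = np.clip(y_hat, eps, 1 - eps)
    return -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))

# gradient of the cross-entropy for y_hat = sigmoid(Xw + b):
#   dw = (1/n) * X.T @ (y_hat - y),   db = (1/n) * sum(y_hat - y)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
w, b = np.zeros(1), 0.0
y_hat = sigmoid(X @ w + b)           # all 0.5 at initialization
dw = X.T @ (y_hat - y) / len(y)
db = np.mean(y_hat - y)
```

So the "entropy loss" lives implicitly inside the gradient formulas: differentiating the cross-entropy through the sigmoid collapses to the simple residual `y_hat - y`.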
I'm a bit confused: the teacher showed us a different way, where gradient descent is derived from an ugly formula involving a logarithm. Comment by: @babaabba9348
Thank you for the summary. Comment by: @gurekodok
You saved me. Thank you. Comment by: @ubaidhunts
Congrats, because a lot of people don't do it from scratch. Comment by: @akhadtop2067
How do you implement one-vs-rest from scratch and integrate it with logistic regression? Comment by: @shaikrasool1316
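One-vs-rest trains one binary classifier per class (class c vs. everything else) and predicts the class whose classifier scores highest. A self-contained sketch under that scheme, with a tiny gradient-descent binary fit standing in for the video's class (names and hyperparameters are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fit_binary(X, y, lr=0.5, n_iters=2000):
    # plain gradient-descent logistic regression for labels in {0, 1}
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(n_iters):
        err = sigmoid(X @ w + b) - y
        w -= lr * X.T @ err / len(y)
        b -= lr * err.mean()
    return w, b

def fit_ovr(X, y):
    # one binary model per class: class c vs. everything else
    return {c: fit_binary(X, (y == c).astype(float)) for c in np.unique(y)}

def predict_ovr(models, X):
    # pick the class whose binary model gives the highest probability
    classes = sorted(models)
    scores = np.column_stack(
        [sigmoid(X @ models[c][0] + models[c][1]) for c in classes])
    return np.array(classes)[scores.argmax(axis=1)]

# toy 3-class data: each class occupies its own corner of the plane
X = np.array([[0., 0.], [0., 1.], [5., 0.], [5., 1.], [0., 5.], [1., 5.]])
y = np.array([0, 0, 1, 1, 2, 2])
models = fit_ovr(X, y)
print(predict_ovr(models, X))
```

The alternative to one-vs-rest is softmax (multinomial) regression, which trains all class weights jointly; sklearn's `LogisticRegression` supports both.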
Great insight, thank you. It would be even better if you could have shown us the raw data and explained the variables and what exactly we were trying to predict, etc. Thanks. Comment by: @sz8558
Thanks, dude. I was searching all over the web for whether you have to put the 0.5 thresholding into the predict function used by gradient descent / the cost function, but you showed me that it's only for the final prediction made afterwards. Comment by: @_inetuser
Can you explain why I am getting this error? ValueError: not enough values to unpack (expected 2, got 1) def fit(self, X, y): ---> 12 n_samples, n_features = X.shape — on line 12. Edit: when I do this with the normal logistic regression class from sklearn it works, but why not with the one we created? Comment by: @KUNALSINGH-zk8su
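That error usually means X is a 1-D array: `X.shape` is then a one-element tuple, which cannot unpack into two variables, whereas sklearn validates its input and reshapes or raises a clearer message. Reshaping to a 2-D column fixes it:

```python
import numpy as np

X = np.arange(10)        # shape (10,): one value per sample, but only 1-D
# n_samples, n_features = X.shape  # ValueError: not enough values to unpack

X = X.reshape(-1, 1)     # shape (10, 1): 10 samples, 1 feature
n_samples, n_features = X.shape
print(n_samples, n_features)  # → 10 1
```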
Glad I discovered this video. Comment by: @mrfrozen97-despicable
I have 2 questions: 1. Why are we transposing X? (I checked the numpy documentation; transpose swaps the dimensions, but I can't see the point here.) 2. How are we getting the summation without applying np.sum? Can you please answer? Comment by: @satyakikc9152
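Both questions have the same answer: the matrix product `X.T @ error` multiplies and sums in one step. Transposing makes the shapes line up — (n_features, n_samples) times (n_samples,) yields one number per feature — and the sum over samples is built into the dot product. A small check with made-up numbers:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])            # 3 samples, 2 features
error = np.array([0.1, -0.2, 0.3])    # one residual per sample

dw_dot = X.T @ error                          # transpose + dot: sums over samples implicitly
dw_sum = np.sum(X * error[:, None], axis=0)   # the same thing written out with np.sum
print(dw_dot, dw_sum)                         # → [1.  1.2] [1.  1.2]
```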
Nice video, thanks a lot. Very good compared to comparable ones I have looked at. Comment by: @paulbrown5839
Hi, your videos are just awesome! One question: do the iterations of the fit method's for loop correspond to a neural network's hidden layers? Is that true? Comment by: @Mar-kb8yq
I just started learning this and tried running the code in a Jupyter notebook. It keeps saying "no module named logistic regression". It might be a stupid question, but please let me know why this is happening. Comment by: @justAdancer
Thank you, sir. If we want to use elastic-net regularization along with this logistic regression, how should we approach it? Comment by: @goodboytobi8202
Great job. Your code is concise and logical, but it does require a solid background. I just wish I got better accuracy. I used it on the "Titanic" dataset on Kaggle and could only get 66%. That's the lowest of all the models I have tried. Markov chain Monte Carlo gave me the best so far at 78%. Any idea how I can get a better score? Comment by: @bryanchambers1964
Is calculus a prerequisite to this series? I am learning, but feel a bit lost when it comes to the implementations because it is difficult for me to understand the underlying mathematical concepts. I do appreciate the videos! Comment by: @PaulWalker-lk3gi
Can I use this for writer identification?? Can you respond fast? Comment by: @keerthyeran1742
When I code and run this model on the "advertising data set" from Kaggle, the accuracy is only in the 40-50% range, while the sklearn LogisticRegression model is over 90%. I've tried varying the number of iterations and the learning rate, but I can't get an accuracy score above 50%. Comment by: @nackyding
Great work, bro. I am sure you will reach 100K soon. Best of luck. Comment by: @jayantmalhotra1449
We don't need to define an accuracy function. We can use sklearn.metrics.accuracy_score instead. Comment by: @damianwysokinski3285
I have questions regarding random_state: some sources set it to 42 or 95, and when I changed this number, the accuracy changed as well. For example, on the make_blobs dataset, if I set it to 95 the classifier gave good accuracy (~99%), but when I set it to 42 it gave around 88%. Also, I got this error (RuntimeWarning: overflow encountered in exp — return 1 / (1 + np.exp(-x))) when I changed the learning rate value. Comment by: @nadabu.274
Excellent video. I am really starting to get a good understanding of these ML algorithms after watching your videos. Comment by: @nadabu.274
Hi, I wrote your code and tried applying it to the custom dataset below, but it raised an error. I also tried reshaping the vector into 3 features, and that also gave me an error. It only works on the code you gave — why is that? Help me. I also tried loading the Boston dataset with your code, and I faced an error there too. Could you tell me why? X = np.arange(10) X_train = np.arange(7) y_train = np.arange(7) X_test = np.array([8,9,10]) y_test = np.array([8.5,9.5,10.5]) Comment by: @abhisekhagarwala9501
-(wx+b) instead of -wx+b Comment by: @global_southerner
I followed exactly what you did, yet I get an error that says "object has no attribute 'sigmoid'", although I typed the exact same thing. In addition, your code in the video and on GitHub is different and needs updating, for example learning_rate vs. lr. :) Comment by: @bassamal-kaaki3253
SHOULDN'T DW HAVE AN NP.SUM OUTSIDE AS WELL? YOU HAVE SUMMED UP THE DB'S BUT NOT THE DW'S. Comment by: @adithyarajagopal1288
Hi, I am getting a runtime warning: RuntimeWarning: overflow encountered in exp — return 1/(1+np.exp(-x)) — 0.8947368421052632. What should I do to avoid this? Comment by: @prashantsharmastunning
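This overflow warning (also reported a few comments above) appears when `-x` is a large positive number and `np.exp` overflows float64; it is usually harmless, since the result still rounds to 0. A numerically stable variant evaluates the two sign branches separately, so the argument to `exp` is never positive — a hedged sketch:

```python
import numpy as np

def stable_sigmoid(x):
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    pos = x >= 0
    # for x >= 0, exp(-x) <= 1: no overflow possible
    out[pos] = 1 / (1 + np.exp(-x[pos]))
    # for x < 0, rewrite as exp(x) / (1 + exp(x)): exp(x) <= 1, also safe
    ex = np.exp(x[~pos])
    out[~pos] = ex / (1 + ex)
    return out

print(stable_sigmoid(np.array([-1000.0, 0.0, 1000.0])))  # 0, 0.5, 1 — no warning
```

A large learning rate makes the weights (and hence `x`) blow up, which is what triggers the warning, so lowering the learning rate also helps.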
Very good! Comment by: @DanielDaniel-hk9el
I used this logistic regression algorithm for disease prediction, and while pickling I got this error for the model. Can you please explain what kind of error this is and how to overcome it? Please help me out: UnpicklingError: invalid load key, '\xe2'. Comment by: @kritamdangol5349
Hi bro, you have used the squared loss function, but logistic regression has log loss. If we differentiate the squared loss w.r.t. w and b, do we get the same result as the log-loss derivative? Comment by: @arungovindarajan93
This is the clearest explanation I have seen!! Thank you so much!! :) Comment by: @ireneashamoses4209
Hi, in order to find the bias and weights, why didn't you just solve the system of equations? It's a quadratic function and has a global minimum. Why do we need to apply the gradient descent technique in this case? And a second question: why did you define a number of iterations instead of defining a minimal difference between the weights (and biases) of subsequent iterations and stopping based on that, so you don't need to choose a learning rate? Comment by: @freddiebrumin2059
How can I plot the sigmoid curve the same way you plotted the fitted line in linear regression at the end? Comment by: @nikolayandcards
Sir, my code (sigmoid function) is giving an exp overflow error during iteration. How can I overcome it? Comment by: @anmolvarshney8938
Hello, please, is there any way we could get access to the Jupyter notebook you referenced in this video? Comment by: @adbeelomiunu7816
Why use gradient descent? Can't we set the derivative with respect to the parameters equal to zero and find w and b by solving the equations? Please answer this question, I really need the answer. Comment by: @thecros1076
Excellent! Comment by: @fahimfaisal4660
Hi, can this algorithm be extended to a multi-class problem? Comment by: @alexanderperegrinaochoa7491
Hi, I am using the code lines below to update the weights and bias, but they are giving me an error. Could you please help? Here is the code: w = w - (alpha_lr_rate * dw), b = b - (alpha_lr_rate * db), where w = np.random.normal(loc=0.0, scale=1, size=X.shape[1]) and b = 0. Error: operands could not be broadcast together with shapes (15,) (15,37500). Comment by: @reetikagour1203
Why do we use the bias? Comment by: @lucas.vieira
Lovely. Cheers! Comment by: @dhananjaykansal8097
This is a great tutorial! Thank you! Comment by: @parismollo7016
Logistic regression minimizes log loss. You are minimizing squared loss... why is that? Comment by: @passionatedevs8158
Could you please tell me how to do logistic regression with L2 regularization? Comment by: @stevewang5112
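L2 regularization adds (λ/2)·‖w‖² to the loss, which in the gradient step simply adds λ·w to dw; the bias is conventionally left unregularized. A hedged sketch of the modified update loop (names and hyperparameters are assumptions, not the video's code):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def fit_l2(X, y, lr=0.1, n_iters=1000, lam=0.1):
    # gradient descent on cross-entropy + (lam/2) * ||w||^2
    n_samples, n_features = X.shape
    w, b = np.zeros(n_features), 0.0
    for _ in range(n_iters):
        err = sigmoid(X @ w + b) - y
        dw = X.T @ err / n_samples + lam * w  # L2 penalty: extra lam * w term
        db = err.mean()                       # bias is not regularized
        w -= lr * dw
        b -= lr * db
    return w, b

# toy 1-D data: larger lam shrinks the learned weight toward zero
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
w_reg, _ = fit_l2(X, y, lam=0.5)
w_free, _ = fit_l2(X, y, lam=0.0)
```

For the elastic-net question above, add an L1 term as well, e.g. `dw += alpha * np.sign(w)` for an assumed L1 strength `alpha`; sklearn's `LogisticRegression(penalty="elasticnet", solver="saga")` does this out of the box.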
How can we update it to a multi-class version, with more than 2 labels? Comment by: @keshavarzpour
Things are explained much more elaborately here. Comment by: @ranitbandyopadhyay