Post subject: Nn Model Sugar Hit
Posted: 21 Apr 2016, 03:43 

There's a popular story about neural networks that goes as follows: linear models are interpretable; neural networks, on the other hand, are black boxes. I have heard this story several times before, but it's hard to quantitatively assess how prevalent it is. In a similar spirit, I'd like to question the notion that real-world, large-scale linear classifiers are truly interpretable in any way that is real and useful.

It's also important to note that, because neural networks are differentiable from top to bottom via back-propagation, it is possible to trace the contribution of each input to the output and to show which inputs would move the output most if slightly increased or decreased. In this sense you could argue that both neural networks and linear models are similarly interpretable. Further, why should we want to manually account for every feature in a given model? Isn't our purpose, in part, to build models that can learn from more features than any human could consciously account for?
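As an aside, to make that back-propagation point concrete (this sketch is mine, not from the quoted article), here is a minimal gradient-sensitivity example assuming PyTorch; the tiny network and the three made-up inputs are placeholders rather than anyone's actual model.

Code:
import torch
import torch.nn as nn

# Stand-in two-layer network; the architecture and the (sugar, exercise, age)
# input ordering are assumptions for illustration only.
model = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())

# A single input marked as requiring gradients so back-propagation
# reaches all the way down to it.
x = torch.tensor([[0.7, 0.2, 0.5]], requires_grad=True)
risk = model(x)
risk.backward()

# x.grad now holds d(risk)/d(x_i): it shows which inputs would move the
# output most if slightly increased or decreased.
print(x.grad)

The same trick works for any differentiable model, which is the point of the passage above: the input gradient is a crude but perfectly well-defined local sensitivity measure, black box or not.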

You might think that if the model for diabetes is correct, then sugar intake must be positively correlated with diabetes and should thus get a positive weight, or that exercise must be negatively correlated and should thus get a negative weight in the model. While our toy example doesn't fully articulate the depth of this kind of problem, it becomes apparent when considering linear models with tens or hundreds of thousands of features, as are common in text-mining tasks. Adding further complexity, high-dimensional linear models are often subject to a combination of dimensionality reduction and regularization, and mapping back from the reduced-dimension feature set to the original feature space can be complicated or ambiguous. Regarding the second notion of interpretability, this is clearly more of a problem for methods like neural networks, which rely on non-convex optimization, than for simpler models like logistic regression, which presents a straightforward convex optimization problem.

(Zachary Chase Lipton)
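To see the sugar/exercise point in code (again my sketch, not the article's example), here is a toy logistic regression assuming NumPy and scikit-learn; the synthetic population, the feature names, and the coefficients are all invented for illustration.

Code:
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Invented population in which sugar intake largely tracks exercise.
exercise = rng.normal(size=n)
sugar = 0.8 * exercise + 0.6 * rng.normal(size=n)

# Assumed "true" risk model: sugar raises risk, exercise lowers it.
logit = 1.0 * sugar - 2.0 * exercise
diabetes = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X = np.column_stack([sugar, exercise])
clf = LogisticRegression().fit(X, diabetes)

print("learned weights:", dict(zip(["sugar", "exercise"], clf.coef_[0].round(2))))
print("corr(sugar, diabetes):", round(float(np.corrcoef(sugar, diabetes)[0, 1]), 2))

# The sugar weight comes out positive even though, marginally, sugar is
# negatively correlated with diabetes in this sample (it mostly tracks the
# protective exercise feature).

If reading a weight's sign as a stand-alone fact about a feature is already misleading with two correlated features, it only gets worse with the tens of thousands of features, dimensionality reduction, and regularization mentioned above.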

