I want to code up a linear regression on the GPU using matrix operations (the closed-form solution), not gradient descent.
Is there a way to use TensorFlow’s GPU capabilities to speed up these matrix operations?
submitted by /u/sadfasn
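A minimal sketch of what this could look like, assuming TensorFlow 2.x: the synthetic data shapes are illustrative assumptions, and TensorFlow places these linear-algebra ops on a visible GPU automatically. `tf.linalg.lstsq` and `tf.linalg.solve` are the relevant built-in solvers.

```python
import tensorflow as tf

# Synthetic data: 100k samples, 10 features (shapes assumed for illustration).
n, d = 100_000, 10
X = tf.random.normal([n, d], dtype=tf.float64)
true_w = tf.random.normal([d, 1], dtype=tf.float64)
y = X @ true_w + tf.random.normal([n, 1], stddev=0.1, dtype=tf.float64)

# Option 1: built-in least-squares solver.
w_lstsq = tf.linalg.lstsq(X, y)

# Option 2: explicit normal equations, w = (X^T X)^{-1} X^T y,
# using tf.linalg.solve rather than forming an explicit inverse.
xtx = tf.transpose(X) @ X
xty = tf.transpose(X) @ y
w_normal = tf.linalg.solve(xtx, xty)

print(w_lstsq[:3].numpy())
print(w_normal[:3].numpy())
```

If a GPU is available, the matmuls and solves above run on it without any extra code; you can pin them explicitly with a `with tf.device('/GPU:0'):` block if you want to be sure.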