path: root/src
Age        | Commit message | Author | Files | Lines
2014-07-17 | collect CUDA includes and calls, separate from CPU-only mode, leave out | Evan Shelhamer | 48 | -238/+276
2014-07-17 | add guards to drop GPU code in CPU-only mode | Evan Shelhamer | 3 | -80/+93
2014-07-17 | stub out GPU layer methods to crash loudly in CPU-only mode | Evan Shelhamer | 26 | -0/+102
2014-07-15 | Another bugfix related to my CPU/GPU test changes: make NetTest a | Jeff Donahue | 1 | -1/+1
2014-07-15 | Add Net Test to verify correct param_propagate_down behavior. | Jeff Donahue | 1 | -8/+111
2014-07-15 | Use Blobs instead of SyncedMemorys for the bias_multiplier_'s. | Jeff Donahue | 4 | -20/+12
2014-07-15 | Make ConvolutionLayer and InnerProductLayer abide by param_propagate_down_ | Jeff Donahue | 4 | -79/+112
2014-07-15 | Add param_propagate_down_ vector to layer, populate according to | Jeff Donahue | 1 | -1/+11
2014-07-15 | use layer_param instead of layers_[layer_id]->layer_param() | Jeff Donahue | 1 | -3/+2
2014-07-14 | Fix SoftmaxLayerTest: forgot to change this one to use DtypesAndDevices; | Jeff Donahue | 1 | -8/+11
2014-07-14 | Seed HingeLossLayerTest; bad values can cause test (and therefore Travis | Jeff Donahue | 1 | -0/+1
2014-07-14 | Move test headers to include/. | Jeff Donahue | 42 | -2240/+616
2014-07-10 | Replace cudaMemcpy with caffe_gpu_memcpy in SyncedMemory per @longjon | Kai Li | 3 | -10/+6
2014-07-10 | Implement @Yangqing's solution to copy memory in the SyncedMemory | Kai Li | 1 | -4/+2
2014-07-10 | Switch to GPU mode when pointer is move to or from GPU in SyncedMemory | Kai Li | 1 | -0/+2
2014-07-10 | Check the GPU mode to decide which memcpy to use | Kai Li | 1 | -6/+6
2014-07-10 | Avoid using cudaMemcpy for memcpy when there is no GPU and CUDA driver | Kai Li | 2 | -2/+12
2014-07-07 | Merge pull request #614 from ronghanghu/rectangular_pooling | Jeff Donahue | 4 | -174/+526
2014-07-07 | added gradient check for non-square pooling | Ronghang Hu | 1 | -86/+128
2014-07-07 | fixed style errors | Ronghang Hu | 3 | -21/+23
2014-07-05 | add tests for rectangular pooling regions | Ronghang Hu | 4 | -55/+313
2014-07-05 | fixing pooling SetUp() to allow default values for stride and pad | Ronghang Hu | 1 | -10/+10
2014-07-03 | Update pooling_layer.cu | Ronghang Hu | 1 | -53/+64
2014-07-03 | Update pooling_layer.cpp | Ronghang Hu | 1 | -24/+57
2014-07-03 | Update caffe.proto | Ronghang Hu | 1 | -5/+11
2014-07-03 | fix casts (static for void*) | Evan Shelhamer | 1 | -12/+12
2014-07-03 | reduce caffe_copy to instantiations, split off caffe_memcpy for void* | Evan Shelhamer | 4 | -28/+14
2014-07-03 | replace all memset with caffe_set() / caffe_gpu_set() | Evan Shelhamer | 8 | -28/+18
2014-07-03 | replace all memcpy by caffe_copy | Evan Shelhamer | 21 | -80/+66
2014-07-03 | do all caffe_copy() as UVA mem copy, and drop caffe_gpu_copy() | Evan Shelhamer | 1 | -8/+13
2014-07-03 | replace softmax cudaMemcpy with caffe_gpu_copy | Evan Shelhamer | 1 | -4/+2
2014-07-03 | switch to unified virtual addressing CUDA memcpy | Evan Shelhamer | 8 | -16/+16
2014-07-03 | report UVA in platform test | Evan Shelhamer | 1 | -0/+2
2014-07-03 | ConvolutionLayer can take N bottom blobs and N top blobs | Jeff Donahue | 3 | -130/+195
2014-06-29 | Merge pull request #545 from jamt9000/im2col-kernel-test | Evan Shelhamer | 1 | -0/+125
2014-06-28 | lint | Evan Shelhamer | 2 | -1/+2
2014-06-28 | Remove Cuda.major >= 2 check on Dropout test | Sergio | 1 | -8/+0
2014-06-27 | Check that pointers are different before copying in caffe_copy and caffe_gpu_... | Sergio | 1 | -4/+12
2014-06-27 | Added test to Dropout to check gradients during Test phase | Sergio | 2 | -1/+27
2014-06-27 | Fix var names in Dropout.cu | Sergio | 1 | -1/+1
2014-06-27 | Modify Dropout to allow backward pass in TEST phase | Sergio | 2 | -13/+19
2014-06-27 | Comment-fix. | Rob Hess | 1 | -1/+1
2014-06-27 | Update name of last added param. | Rob Hess | 1 | -1/+1
2014-06-27 | Add unit test for accuracy layer. | Rob Hess | 1 | -0/+90
2014-06-27 | Next LayerParameter proto id | cypof | 1 | -1/+1
2014-06-27 | Use vectors instead of arrays. | Rob Hess | 1 | -6/+4
2014-06-27 | Compute top-k accuracy in AccuracyLayer. | Rob Hess | 1 | -11/+23
2014-06-27 | Incorporate top_k param into AccuracyLayer and check it's value. | Rob Hess | 2 | -1/+4
2014-06-27 | Add parameter for AccuracyLayer in proto. | Rob Hess | 1 | -0/+9
2014-06-27 | Test for im2col kernel | James Thewlis | 1 | -0/+125
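
The 2014-07-17 entries at the top of this log introduce a CPU-only build mode: CUDA includes and calls are compiled out, and GPU layer methods are stubbed so that any accidental GPU use crashes loudly instead of failing silently. The sketch below is illustrative only and does not reproduce the actual Caffe sources; the class ExampleLayer and its Forward_cpu/Forward_gpu methods are hypothetical names, and CPU_ONLY stands in for the compile-time flag the commits describe. Build with -DCPU_ONLY to get the stubbed GPU path.

```cpp
// Illustrative guard-and-stub sketch (hypothetical names, not Caffe code).
// When CPU_ONLY is defined, GPU code is dropped and the GPU entry point
// aborts with an explicit message.
#include <cstdlib>
#include <iostream>
#include <vector>

class ExampleLayer {
 public:
  // The CPU path is always compiled and always available.
  void Forward_cpu(const std::vector<float>& bottom, std::vector<float>* top) {
    top->assign(bottom.begin(), bottom.end());
  }

#ifdef CPU_ONLY
  // CPU-only build: instead of silently falling back to the CPU, the GPU
  // method fails loudly so a mode mismatch is caught immediately.
  void Forward_gpu(const std::vector<float>& /*bottom*/,
                   std::vector<float>* /*top*/) {
    std::cerr << "Cannot use GPU methods in a CPU-only build." << std::endl;
    std::abort();
  }
#else
  // GPU build: the real implementation would live in a .cu file and launch a
  // CUDA kernel; only the declaration appears here.
  void Forward_gpu(const std::vector<float>& bottom, std::vector<float>* top);
#endif
};

int main() {
  ExampleLayer layer;
  std::vector<float> bottom(4, 1.0f);
  std::vector<float> top;
  layer.Forward_cpu(bottom, &top);  // works in both build modes
#ifdef CPU_ONLY
  // Uncommenting the next line in a -DCPU_ONLY build would abort loudly:
  // layer.Forward_gpu(bottom, &top);
#endif
  std::cout << "CPU forward copied " << top.size() << " values" << std::endl;
  return 0;
}
```

In a full framework the same guard-and-stub pattern would likely be factored into shared macros rather than written per layer as above, but the mechanics are the same: compile out the CUDA calls and make any leftover GPU entry point fail fast.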