Pre-trained Caffe models can be found in Caffe's official GitHub repository.
The caffe_data_extractor.py script provided in the scripts folder shows how to extract the parameter values from a trained model.
Install Caffe following Caffe's installation documentation, and make sure pycaffe has been added to the PYTHONPATH.
Download the pre-trained Caffe model.
Run the caffe_data_extractor.py script with:
python caffe_data_extractor.py -m <caffe model> -n <caffe netlist>
For example, to extract the data from the pre-trained Caffe AlexNet model to binary files:
python caffe_data_extractor.py -m /path/to/bvlc_alexnet.caffemodel -n /path/to/caffe/models/bvlc_alexnet/deploy.prototxt
The script has been tested with Python 2.7.
If the script runs successfully, it prints the name and shape of each layer to standard output and generates *.npy files containing the weights and biases of each layer.
The arm_compute::utils::load_trained_data utility shows how the extracted weights and biases can be loaded from a .npy file into a tensor with the help of an Accessor.
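As a rough illustration, the sketch below fills a single tensor with one of the generated weight files. It assumes the load_trained_data(tensor, filename) helper declared in the library's utils/Utils.h; the tensor shape and the file name conv1_w.npy are placeholders and should be replaced with the shape and file name printed by the extractor script.

#include "arm_compute/core/TensorInfo.h"
#include "arm_compute/core/TensorShape.h"
#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/Tensor.h"
#include "utils/Utils.h"

int main()
{
    using namespace arm_compute;

    // Placeholder shape: use the shape printed by caffe_data_extractor.py
    // for the layer being loaded (here 96 kernels of 11x11x3 as an example).
    Tensor weights;
    weights.allocator()->init(TensorInfo(TensorShape(11U, 11U, 3U, 96U), 1, DataType::F32));
    weights.allocator()->allocate();

    // Fill the tensor with the trained values dumped by the extractor script.
    // "conv1_w.npy" is a placeholder name for one of the generated files.
    utils::load_trained_data(weights, "conv1_w.npy");

    return 0;
}

The same pattern applies to the bias files and to the .npy files produced by the TensorFlow extractor described below.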
The tensorflow_data_extractor.py script extracts trainable parameters (e.g. the values of weights and biases) from a trained TensorFlow model. A TensorFlow model consists of the following two files:
{model_name}.data-{step}-{global_step}: A binary file containing values of each variable.
{model_name}.meta: A binary file containing a MetaGraph struct which defines the graph structure of the neural network.
Install TensorFlow and NumPy.
Download the pre-trained TensorFlow model.
Run the tensorflow_data_extractor.py script with:
python tensorflow_data_extractor.py -m <path_to_binary_checkpoint_file> -n <path_to_metagraph_file>
For example, to extract the data from the pre-trained TensorFlow AlexNet model to binary files:
python tensorflow_data_extractor.py -m /path/to/bvlc_alexnet -n /path/to/bvlc_alexnet.meta
Or, for binary checkpoint files created before TensorFlow 0.11:
python tensorflow_data_extractor.py -m /path/to/bvlc_alexnet.ckpt -n /path/to/bvlc_alexnet.meta
The script has been tested with TensorFlow 1.2 and 1.3 on Python 2.7.6 and Python 3.4.3.
If the script runs successfully, it prints the name and shape of each parameter to standard output and generates .npy files containing the weights and biases of each layer.
As in the Caffe case above, the arm_compute::utils::load_trained_data utility shows how these weights and biases can be loaded from a .npy file into a tensor with the help of an Accessor.
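If the tensor shape is not known ahead of time, it can be taken from the NPY header instead of being hard-coded. The sketch below is a variant of the previous example and rests on an assumption: it uses the NPYLoader helper (open / init_tensor / fill_tensor) found in recent releases of the library's utils/Utils.h, and fc8_w.npy is a placeholder for one of the generated files.

#include "arm_compute/core/Types.h"
#include "arm_compute/runtime/Tensor.h"
#include "utils/Utils.h"

int main()
{
    using namespace arm_compute;

    // "fc8_w.npy" is a placeholder for one of the files written by the extractor.
    // NPYLoader (assumed available in utils/Utils.h) parses the NPY header.
    utils::NPYLoader loader;
    loader.open("fc8_w.npy");

    // Derive the tensor shape from the NPY header instead of hard-coding it,
    // then allocate the tensor and copy the trained values into it.
    Tensor weights;
    loader.init_tensor(weights, DataType::F32);
    weights.allocator()->allocate();
    loader.fill_tensor(weights);

    return 0;
}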