Pre-trained Caffe models can be found in Caffe's official GitHub repository.
The caffe_data_extractor.py script provided in the scripts folder is an example that shows how to extract parameter values from a trained model.
Install Caffe following Caffe's documentation, and make sure pycaffe has been added to the PYTHONPATH.
Download the pre-trained Caffe model.
Run the caffe_data_extractor.py script with:
python caffe_data_extractor.py -m <caffe model> -n <caffe netlist>
For example, to extract the data from the pre-trained Caffe AlexNet model to binary files:
python caffe_data_extractor.py -m /path/to/bvlc_alexnet.caffemodel -n /path/to/caffe/models/bvlc_alexnet/deploy.prototxt
The script has been tested under Python 2.7.
If the script runs successfully, it prints the name and shape of each layer to standard output and generates *.npy files containing the weights and biases of each layer.
arm_compute::utils::load_trained_data shows how the weights and biases can be loaded from the .npy files into a tensor with the help of an Accessor.
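The extraction step above can be sketched with plain NumPy. The params dict below is a hypothetical stand-in for pycaffe's net.params (which supplies the real arrays in practice), and the conv1_w.npy / conv1_b.npy file names are illustrative only, not the script's guaranteed naming:

```python
import os
import tempfile

import numpy as np

# Hypothetical stand-in for pycaffe's net.params: layer name -> (weights, biases).
params = {
    "conv1": (np.zeros((96, 3, 11, 11), dtype=np.float32),
              np.zeros((96,), dtype=np.float32)),
}

out_dir = tempfile.mkdtemp()
for name, (weights, biases) in params.items():
    # Mirror what the extractor script does: print name/shape, dump .npy files.
    print(name, weights.shape, biases.shape)
    np.save(os.path.join(out_dir, name + "_w.npy"), weights)
    np.save(os.path.join(out_dir, name + "_b.npy"), biases)

# The generated .npy files round-trip cleanly back into NumPy arrays.
w = np.load(os.path.join(out_dir, "conv1_w.npy"))
print(w.shape)  # (96, 3, 11, 11)
```

Each parameter ends up in its own .npy file, which is the format the Accessor-based loading expects.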
The tensorflow_data_extractor.py script extracts trainable parameters (e.g. the values of weights and biases) from a trained TensorFlow model. A TensorFlow model consists of the following two files:
{model_name}.data-{step}-{global_step}: A binary file containing values of each variable.
{model_name}.meta: A binary file containing a MetaGraph struct which defines the graph structure of the neural network.
Install TensorFlow and NumPy.
Download the pre-trained TensorFlow model.
Run tensorflow_data_extractor.py with:
python tensorflow_data_extractor.py -m <path_to_binary_checkpoint_file> -n <path_to_metagraph_file>
For example, to extract the data from the pre-trained TensorFlow AlexNet model to binary files:
python tensorflow_data_extractor.py -m /path/to/bvlc_alexnet -n /path/to/bvlc_alexnet.meta
Or, for binary checkpoint files created before TensorFlow 0.11:
python tensorflow_data_extractor.py -m /path/to/bvlc_alexnet.ckpt -n /path/to/bvlc_alexnet.meta
The script has been tested with TensorFlow 1.2 and 1.3 on Python 2.7.6 and Python 3.4.3.
If the script runs successfully, it prints the name and shape of each parameter to standard output and generates .npy files containing the weights and biases of each layer.
arm_compute::utils::load_trained_data shows how the weights and biases can be loaded from the .npy files into a tensor with the help of an Accessor.
The tf_frozen_model_extractor.py script extracts trainable parameters (e.g. the values of weights and biases) from a frozen, trained TensorFlow model.
Install TensorFlow and NumPy.
Download the pre-trained TensorFlow model and freeze the model using the architecture and the checkpoint file.
Run tf_frozen_model_extractor.py with:
python tf_frozen_model_extractor.py -m <path_to_frozen_pb_model_file> -d <path_to_store_parameters>
For example, to extract the data from a pre-trained TensorFlow model to binary files:
python tf_frozen_model_extractor.py -m /path/to/inceptionv3.pb -d ./data
If the script runs successfully, it prints the name and shape of each parameter to standard output and generates .npy files containing the weights and biases of each layer.
arm_compute::utils::load_trained_data shows how the weights and biases can be loaded from the .npy files into a tensor with the help of an Accessor.
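Before wiring the generated files into a graph, it can help to sanity-check them. A minimal sketch, in which the data directory and the fc8_w.npy file name are illustrative stand-ins for whatever the extractor script actually wrote out:

```python
import glob
import os
import tempfile

import numpy as np

# Throwaway data directory with one illustrative parameter file; in practice
# this would be the directory the extractor script wrote its .npy files to.
data_dir = tempfile.mkdtemp()
np.save(os.path.join(data_dir, "fc8_w.npy"),
        np.zeros((1000, 4096), dtype=np.float32))

# List every parameter file with its shape and dtype, mirroring the
# name/shape summary the extractor scripts print on success.
for path in sorted(glob.glob(os.path.join(data_dir, "*.npy"))):
    arr = np.load(path)
    print(os.path.basename(path), arr.shape, arr.dtype)
```

A quick pass like this catches a wrong shape or dtype before the tensors are populated on the C++ side.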
Using one of the provided scripts will generate files containing the trainable parameters.
You can validate a given graph example on a list of inputs by running:
LD_LIBRARY_PATH=lib ./<graph_example> --validation-range='<validation_range>' --validation-file='<validation_file>' --validation-path='/path/to/test/images/' --data='/path/to/weights/'
e.g.:
LD_LIBRARY_PATH=lib ./bin/graph_alexnet --target=CL --layout=NHWC --type=F32 --threads=4 --validation-range='16666,24998' --validation-file='val.txt' --validation-path='images/' --data='data/'
where the validation file is a plain text file containing a list of images along with their expected label values, e.g.:
val_00000001.JPEG 65
val_00000002.JPEG 970
val_00000003.JPEG 230
val_00000004.JPEG 809
val_00000005.JPEG 516
--validation-range is the index range of the images within the validation file that you want to check, e.g.:
--validation-range='100,200' will validate 100 images starting from the 100th one in the validation file.
This can be useful when the validation process needs to be parallelized.
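The validation-file and validation-range behaviour above can be sketched in Python. The parsing and the 1-based, inclusive-start slice below are assumptions based on the format and description shown here, not the library's actual implementation:

```python
# A val.txt-style validation file: one "<image> <expected label>" pair per line.
lines = """val_00000001.JPEG 65
val_00000002.JPEG 970
val_00000003.JPEG 230
val_00000004.JPEG 809
val_00000005.JPEG 516
""".splitlines()

entries = []
for line in lines:
    image, label = line.split()
    entries.append((image, int(label)))

# Apply a --validation-range='2,4'-style slice. The exact indexing convention
# (0- vs. 1-based, inclusive vs. exclusive end) is an assumption here.
start, end = 2, 4
subset = entries[start - 1:end]
for image, label in subset:
    print(image, label)
```

Splitting the range like this across several invocations is what makes the parallel validation mentioned above possible: each process checks a disjoint slice of the same file.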