

I am having an issue executing the train_a script (from ): I get a segmentation fault immediately after the CNN model is reported as successfully loaded. The execution is done on a Jetson TX1 on the GPU, with the parameters and output below:

$ th train_a -gpuid 0 -mc_evaluation -verbose -finetune_cnn_after -1

QADatasetLoader loading dataset file: visual7w-toolkit/datasets/visual7w-telling/dataset.json
QADatasetLoader loading json file: data/qa_data.json
QADatasetLoader loading h5 file: data/qa_data.h5
Max question sequence length in data is 15
Reading dangerously large protocol message. If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
The total number of bytes read was 553432081
Successfully loaded cnn_models/VGG_ILSVRC_16_layers.caffemodel

The same setup, with all the packages in place, was tested on an x86_64 server on the CPU and works fine: it consumes roughly 32 GB of RAM over time, but it completes even with less physical RAM by using swap. On the TX1, however, I get a segfault whether I run the script on the CPU or the GPU.

I also managed to add swap space to the TX1 board (by recompiling the kernel with swap support), but the script fails immediately, before it starts to fill the RAM or use the swap space.
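For reference, this is roughly how the swap space was set up, assuming a kernel rebuilt with CONFIG_SWAP=y; the file path and the 8G size are illustrative, not the exact values I used:

$ sudo fallocate -l 8G /swapfile   # reserve space for the swap file (size is an example)
$ sudo chmod 600 /swapfile         # restrict permissions as swapon requires
$ sudo mkswap /swapfile            # format the file as swap
$ sudo swapon /swapfile            # enable it
$ free -h                          # confirm the new swap is visible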

With strace, the last lines before the crash are as follows:

openat(AT_FDCWD, "cnn_models/VGG_ILSVRC_16_layers_", O_RDONLY) = 4
fstat(4, ...)
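The trace above was captured along these lines (the -o output filename is arbitrary):

$ strace -f -o train_a.strace th train_a -gpuid 0 -mc_evaluation -verbose -finetune_cnn_after -1
$ tail train_a.strace   # inspect the last syscalls before the segfault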
