Converting HuggingFace GPT-2 Models to TensorFlow 1.x

HuggingFace Transformers is a wonderful suite of tools for working with transformer models in both TensorFlow 2.x and PyTorch. However, many tools are still written against the original TF 1.x code published by OpenAI. Unfortunately, the model format differs between the TF 2.x models and the original code, which makes it difficult to use models trained with the new code in the old code. There are many tools for converting the old format to TF 2.x and PyTorch, but not vice versa. In this blog post, I will share the (frustrating) process of getting the conversion to work.

I just want the code!

The complete code is available here. If there are any improvements you’d like to make, please open a PR!

First Attempt

My first attempt was to use TFGPT2LMHeadModel to convert PyTorch models to TensorFlow, and then immediately save a TensorFlow checkpoint using save_pretrained. However, I quickly ran into a problem: save_pretrained saves the result as an HDF5 file, instead of as a TF checkpoint. After some mucking around, I found that the save_pretrained method called the save_weights method with a fixed tf_model.h5 filename, and save_weights inferred the save format from the file extension. The solution was just to call save_weights directly, bypassing the hardcoded filename. This wouldn't save the .meta file containing the graph, but since the graph was the same as in the original OpenAI checkpoints, the .meta files could just be copied over.
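
In code, the idea was roughly the following (a minimal sketch; the model name and output path are placeholders):

from transformers import TFGPT2LMHeadModel

# from_pt=True loads a PyTorch checkpoint into the TF 2.x Keras model.
model = TFGPT2LMHeadModel.from_pretrained("gpt2", from_pt=True)

# save_pretrained() hardcodes tf_model.h5 (HDF5); calling save_weights()
# directly with save_format="tf" writes a TF checkpoint instead.
model.save_weights("converted/model.ckpt", save_format="tf")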

The model checkpoint seemed to have the right format, so I put the resulting checkpoint in the models directory, and… it didn’t work.

Second Attempt

Due to the use of Keras modules (and differently named variables), the variable names were significantly different: model/h0/attn/c_attn/w in the OpenAI model was transformer/h/0/attn/c_attn/weight/.ATTRIBUTES/VARIABLE_VALUE in the HuggingFace TF model! I found a script for renaming TF variables in checkpoints to use as a starting point and painstakingly combed through the differences in variable names to produce this hodgepodge:

# Strip the "transformer/" prefix and the Keras object-graph suffix.
new_name = new_name[12:].replace('/.ATTRIBUTES/VARIABLE_VALUE', '')
# Map the Keras parameter names onto the OpenAI names.
new_name = new_name.replace('weight', 'w')
new_name = new_name.replace('bias', 'b')
new_name = new_name.replace('beta', 'b')   # layer norm offset
new_name = new_name.replace('gamma', 'g')  # layer norm scale
# Collapse the embedding variables down to just wpe / wte.
if 'wpe' in new_name:
    new_name = 'wpe'
if 'wte' in new_name:
    new_name = 'wte'
# OpenAI prefixes everything with "model/" and writes h0, h1, ... rather than h/0, h/1, ...
new_name = 'model/' + new_name
new_name = new_name.replace('/h/', '/h')
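
For context, this renaming runs inside a loop over every variable in the TF 2.x checkpoint. A rough sketch of the surrounding script (the function name, paths, and the filter that skips Keras bookkeeping entries are my own placeholders, not the exact code):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

def convert_checkpoint(src_ckpt, dst_ckpt):
    reader = tf.train.load_checkpoint(src_ckpt)
    new_vars = []
    for old_name, _ in tf.train.list_variables(src_ckpt):
        # Skip bookkeeping entries like _CHECKPOINTABLE_OBJECT_GRAPH that
        # have no counterpart in the OpenAI checkpoint (assumed filter).
        if 'VARIABLE_VALUE' not in old_name:
            continue
        var = reader.get_tensor(old_name)
        new_name = old_name
        # ... apply the renaming (and, later, reshaping) rules shown in this post ...
        new_vars.append(tf.compat.v1.Variable(var, name=new_name))
    saver = tf.compat.v1.train.Saver(new_vars)
    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.variables_initializer(new_vars))
        saver.save(sess, dst_ckpt, write_meta_graph=False)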

Another bizarre issue was that for the 774M model, saving would result in a protobuf error if saving the meta graph was enabled. We could just copy the .meta over anyway, so this wasn't a big deal, but it was an ugly kludge.
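
The copy itself is trivial; a sketch with placeholder paths (assuming the original 774M checkpoint downloaded with OpenAI's script lives in models/774M):

import shutil

# Reuse the graph definition from the original OpenAI checkpoint, since the
# converted weights are laid out for the same graph (paths are placeholders).
shutil.copy("models/774M/model.ckpt.meta", "converted/model.ckpt.meta")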

All the variables were mapped over to the right names, so I put the resulting checkpoint in the models directory, and… it didn’t work.

Third Attempt

It turns out that in the original GPT-2 model, biases are stored with shape (N), but in the TF 2.x model they have shape (1, N). Even more inexplicably, weights are stored with shape (1, N, N) in the original model, while they are stored in a much more sensible (N, N) in the TF 2.x model. I had to add this bit of code to remove the extra dimension for biases and add an extra dimension for weights other than the embedding matrices:

# Flatten biases to 1-D: (1, N) -> (N,); a no-op for the layer norm parameters.
if 'ln' in new_name or '/b' in new_name:
    var = var.reshape(-1)
# Conv1D weights are (N, N) in the TF 2.x model but (1, N, N) in the original,
# so add a leading dimension (the embedding matrices keep their shape).
if '/w' in new_name and not ('wpe' in new_name or 'wte' in new_name):
    var = var.reshape((1, *var.shape))
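
At this point it is easy to sanity-check the result by comparing variable names and shapes against an original OpenAI checkpoint; a quick sketch (paths are placeholders):

import tensorflow as tf

# tf.train.list_variables returns (name, shape) pairs for a checkpoint.
original = dict(tf.train.list_variables("models/124M/model.ckpt"))
converted = dict(tf.train.list_variables("converted/model.ckpt"))
for name, shape in sorted(original.items()):
    if converted.get(name) != shape:
        print("mismatch:", name, shape, converted.get(name))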

All the variables were mapped to the right sizes now, so I put the resulting checkpoint in the models directory, and… it worked!

Conclusion

Now that we can convert GPT-2 checkpoints in both directions between the original OpenAI TF 1.x format and the newer HuggingFace PyTorch/TF 2.x formats, it will hopefully encourage more people to switch to the newer formats, which are, in general, much easier to work with. The next step would be to implement saving to arbitrary formats in the HuggingFace Transformers repository.

...