Daniel Hay Guest edited this page Aug 14, 2017 · 8 revisions

Saving Keras Models

Note that recent versions of Keras support two ways of saving models: one that saves the full network to a single HDF5 file, and one that saves the architecture to JSON and the weights to HDF5. We only support the latter, so if you saved your network with model.save(filepath) you'll have to reload it and save it again with

# reload the model that was saved with model.save(filepath)
from keras.models import load_model
model = load_model(filepath)

# get the architecture as a json string
arch = model.to_json()
# save the architecture string to a file somehow, the below will work
with open('architecture.json', 'w') as arch_file:
    arch_file.write(arch)
# now save the weights as an HDF5 file
model.save_weights('weights.h5')

Inspecting Converter Input Files

If you have any issues with the keras2json.py converter it's helpful to be able to inspect the Keras outputs.

Inspecting the json file(s)

JSON is human readable, but by default Keras writes it as one long unformatted string. It's much easier to read if you run

python -m json.tool <arch_file>

which will print the file in a much more readable format. This is also useful to check that your variables.json file is valid json.

Inspecting the HDF5 File

HDF5 comes with a nice utility called h5ls which can dump the contents of a file. To make this even easier, we have a tab-completion script that you can source in your setup.
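If you'd rather stay in Python, the h5py package (assuming you have it installed) can do the same inspection as h5ls. This is a minimal sketch, with an invented helper name, that lists every group and dataset in a weights file:

```python
import h5py

def dump_hdf5_contents(path):
    """Print every group and dataset in an HDF5 file, similar to h5ls -r."""
    names = []
    with h5py.File(path, 'r') as h5_file:
        # visititems walks the whole group/dataset hierarchy
        h5_file.visititems(lambda name, obj: names.append(name))
    for name in names:
        print(name)
    return names

# e.g. dump_hdf5_contents('weights.h5')
```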

lwtnn converter Input Variable File

lwtnn requires an input file specifying variables used in the training. Here are two basic examples: one for the sequential API and one for the functional API.

Note that the functional converter will build your input file for you if you leave it off when calling the converter from the command line. You may then want to go through and give the variables meaningful names (other than variable_0 etc.).

There are also two additional parameters that are specified for each input variable:

  • The offset values, i.e. the amount you shifted your inputs when training. A common number to use is offset = -mean.

  • The scale values, i.e. how much you scaled your inputs by. A common choice is scale = 1 / standard_deviation.

When lwtnn runs, it will apply the transformation

f(x) = (x + offset) * scale

to each input variable before feeding it to the network.
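To make the convention concrete, here is a short sketch (the array and values are made up) showing how offset and scale would be derived from training data so that lwtnn's transformation standardizes each input:

```python
import numpy as np

# hypothetical training inputs: rows are examples, columns are variables
x_train = np.array([[1.0, 10.0],
                    [3.0, 30.0],
                    [5.0, 50.0]])

# the common choices: offset = -mean, scale = 1 / standard_deviation
offset = -x_train.mean(axis=0)
scale = 1.0 / x_train.std(axis=0)

# lwtnn applies f(x) = (x + offset) * scale to each input variable,
# which with these choices gives zero mean and unit variance
normalized = (x_train + offset) * scale
```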

Functional model input variable file

Note that we recommend using the functional API for everything. We like it so much that we built a script that generates the input file for you.

You can generate a template file by calling

kerasfunc2json.py architecture.json weights.h5 > variables.json

This gives you something like this:

{
  "input_sequences": [],
  "inputs": [
    {
      "name": "node_0",
      "variables": [
        {
          "name": "variable_0",
          "offset": 0,
          "scale": 1
        },
        {
          "name": "variable_1",
          "offset": 0,
          "scale": 1
        }
      ]
    }
  ],
  "outputs": [
    {
      "labels": [
        "out_0",
        "out_1",
        "out_2",
        "out_3"
      ],
      "name": "MyOutputName"
    }
  ]
}
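Since the template only contains placeholder names like variable_0, you may want to substitute meaningful names in bulk rather than by hand. This is a sketch (the replacement names here are invented) that renames the variables in a parsed copy of the template:

```python
import json

# a trimmed copy of the generated template shown above
config = json.loads('''
{
  "input_sequences": [],
  "inputs": [
    {
      "name": "node_0",
      "variables": [
        {"name": "variable_0", "offset": 0, "scale": 1},
        {"name": "variable_1", "offset": 0, "scale": 1}
      ]
    }
  ],
  "outputs": [{"labels": ["out_0"], "name": "MyOutputName"}]
}
''')

# invented mapping from the placeholder names to meaningful ones
new_names = {'variable_0': 'jet_pt', 'variable_1': 'jet_eta'}

# walk every input node and rename its variables in place
for node in config['inputs']:
    for variable in node['variables']:
        variable['name'] = new_names.get(variable['name'], variable['name'])

# dump the renamed config back out as indented json
print(json.dumps(config, indent=2))
```

In practice you would read the generated variables.json from disk, apply the mapping, and write it back.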

Sequential model input variable file

Note that we don't recommend using sequential models in Keras, since they are generally less flexible than functional models.

If you are using sequential models, you can use this as a template for your variables.json file:

{
  "inputs": [
    {
      "name": "variable_0",
      "offset": 0,
      "scale": 1
    },
    {
      "name": "variable_1",
      "offset": 0,
      "scale": 1
    }
  ],
  "class_labels": ["BinaryClassificationOutputName"]
}
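If your network has many inputs, the sequential template above can be generated rather than written by hand. A minimal sketch, assuming you have a list of variable names in the order the network expects them:

```python
import json

# hypothetical input variable names, in network order
variable_names = ['variable_0', 'variable_1']

# build the sequential-style variables file: a flat list of inputs
# plus the output class labels
config = {
    'inputs': [{'name': name, 'offset': 0, 'scale': 1}
               for name in variable_names],
    'class_labels': ['BinaryClassificationOutputName'],
}

print(json.dumps(config, indent=2))
```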
