Windows Support, Breaking Changes in TensorFlow 0.12.0

Alright! The TensorFlow team continues to move at a break-neck pace, and they’ve just released the first release candidate for version 0.12.0. There’s a lot of really good news as well as several crucial breaking changes that users need to be aware of.

Let’s get started:

Windows Support

Windows users have been asking for native support ever since TensorFlow was open-sourced. Over the past few weeks, Windows-related pull requests have been getting merged into master, and the work has progressed far enough that the team is willing to start supporting it officially. That said, it isn’t perfect yet; here are the key things to know about the current state of Windows support:

  • You must use Python 3.5
  • If you want to use your GPU, you must be using CUDA 8.0 and cuDNN 5.1 (sorry, experimental OpenCL support is not available on Windows yet)
  • Building your own pip package from source is trickier than on *nix systems. Bazel support on Windows is highly experimental, although there has been work on the TensorFlow side to make the Bazel build behave as it does on other systems. There’s also CMake support in the contrib folder, though it’s hard to tell whether Bazel or CMake will become the canonical way to build TensorFlow on Windows.
  • A few Operations aren’t available on Windows yet, primarily the quantization, gamma function, and depthwise convolution Ops. They’re all listed in the release notes.
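As a quick illustration of the first constraint, an install script could guard against the wrong interpreter with a check like this (a hypothetical helper, not part of TensorFlow):

```python
import sys

# Hypothetical helper: Windows builds of TensorFlow 0.12 require Python 3.5.
def python_ok_for_tf_windows(version_info=sys.version_info):
    return tuple(version_info[:2]) == (3, 5)

print(python_ok_for_tf_windows((3, 5, 2)))   # True
print(python_ok_for_tf_windows((2, 7, 12)))  # False
```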

Hopefully Windows support gets fully caught up to the rest of the project in the next few months.

Breaking Changes in the Python API

This release has some significant breaking changes to common functions in TensorFlow. For the most part, these are due to Operations being renamed or moved to different submodules within TensorFlow.

tf.initialize_all_variables is now tf.global_variables_initializer

s/initialize_all_variables/global_variables_initializer/g

This is a big change, as initialize_all_variables has been a staple of TensorFlow since its release. However, it’s for the best in the long run: it was never obvious that initialize_all_variables returned an Operation that needed to be run in a Session, and the new name also syncs up with another adjustment:
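The sed command above does the mechanical part of the migration; the same rename can be sketched in a few lines of plain Python (a rough sketch operating on source text, not an official migration tool):

```python
import re

# Rough sketch of the 0.12 rename, applied to source text with a plain
# regex substitution (not an official migration tool):
OLD, NEW = "initialize_all_variables", "global_variables_initializer"

src = "sess.run(tf.initialize_all_variables())"
migrated = re.sub(OLD, NEW, src)
print(migrated)  # sess.run(tf.global_variables_initializer())
```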

tf.VARIABLES collection is now tf.GLOBAL_VARIABLES, and tf.all_variables is now tf.global_variables

This is a change that helps make things more explicit. If you were using the VARIABLES collection to loop through and create summaries, you’ll need to change your code a bit. Oh, and speaking of summaries:

All summaries have been renamed from tf.*_summary to tf.summary.*

If you’re an avid TensorBoard user, this is going to be annoying. In the long run, though, it’s probably best not to have every Operation clog the top-level namespace, and breaking summaries off into their own module makes sense. This includes the new tf.summary.FileWriter, which takes over duty from the now-deprecated tf.train.SummaryWriter. On the bright side, the old operations are still around (if deprecated), so your code will keep working for the time being.
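Because the rename follows a regular pattern (tf.<name>_summary becomes tf.summary.<name>), the common cases can be migrated mechanically. Here’s a rough regex-based sketch (not an official tool; names that don’t follow the pattern still need individual attention):

```python
import re

# Sketch: rewrite tf.<name>_summary(...) calls to tf.summary.<name>(...).
# Covers the common cases (scalar_summary, histogram_summary, ...); any
# call that doesn't follow the pattern needs to be handled by hand.
def rename_summary_calls(source):
    return re.sub(r"tf\.(\w+)_summary\(", r"tf.summary.\1(", source)

print(rename_summary_calls('tf.scalar_summary("loss", loss)'))
# prints: tf.summary.scalar("loss", loss)
```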

batch_* linear algebra Operations have been merged with their non-batch versions

A good change for the long run, as the batch_* syntax was cumbersome and annoying for the most part. It also cleans up the terminology, since “batch” can now refer specifically to training batches and the features related to them (batch_normalization, batch_dim, etc.).
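The old batch-prefixed calls now map onto their plain counterparts, so this rename is also mechanical. A rough sketch (the Op list here is illustrative, not exhaustive; check the release notes for the full set):

```python
# Sketch: strip the batch_ prefix from merged linear-algebra Ops.
# Illustrative subset only; consult the release notes for the full list.
MERGED_LINALG_OPS = ("matmul", "cholesky", "matrix_inverse", "matrix_determinant")

def strip_batch_prefix(source):
    for name in MERGED_LINALG_OPS:
        source = source.replace("tf.batch_" + name, "tf." + name)
    return source

print(strip_batch_prefix("tf.batch_matmul(a, b)"))  # tf.matmul(a, b)
```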


This covers what I consider to be the most important changes in this version for the average user. Check out all the changes yourself in the official release notes.
