Doodles for Visualizing 1D Convolutions in TensorFlow

A guy named Evan asked a question about how to imagine 1D convolutions in TensorFlow. I responded with some hand-drawn sketches (which turned out pretty clean!). They might be useful to others in the future, so I figured this would be a decent stop-gap as I haven’t posted anything here in a while.

This is as good a time as any to mention that I’m working on a post intended to visualize the convolution operation in more detail, emphasizing its intuitive 3D nature (and focusing less on the specific numbers).

Page 1 – Seeing 1D convolutions as a special case of 2D convolutions

Page 2 – Visualizing the convolution operation
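The core idea from page 1 can be sketched in plain Python (toy code for intuition, not the TensorFlow API): a 1D convolution is just a 2D convolution applied to a height-1 input with a height-1 kernel.

```python
def conv1d(signal, kernel):
    """Valid-mode cross-correlation of a 1D signal with a 1D kernel."""
    n = len(signal) - len(kernel) + 1
    return [sum(signal[i + k] * kernel[k] for k in range(len(kernel)))
            for i in range(n)]

def conv2d(image, kernel):
    """Valid-mode cross-correlation of a 2D image with a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h = len(image) - kh + 1
    w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(w)] for i in range(h)]

signal = [1, 2, 3, 4, 5]
kernel = [1, 0, -1]

# Treating the 1D inputs as single-row 2D inputs gives the same result.
print(conv1d(signal, kernel))         # [-2, -2, -2]
print(conv2d([signal], [kernel])[0])  # [-2, -2, -2]
```

This is exactly what happens under the hood when a 1D convolution is expressed with a 2D convolution Op: the extra spatial dimension is just a singleton.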


Windows Support, Breaking Changes in TensorFlow 0.12.0

Alright! The TensorFlow team continues to move at a break-neck pace, and they’ve just released the first release candidate for version 0.12.0. There’s a lot of really good news as well as several crucial breaking changes that users need to be aware of.

Let’s get started:

Windows Support

Windows users have been asking for native support ever since TensorFlow was open-sourced. Over the past few weeks, Windows-related pull requests started getting merged into master, and the work has progressed far enough that the team is willing to start supporting it officially! That said, it isn’t perfect yet; here are the key things to know about current Windows support:

  • You must use Python 3.5
  • If you want to use your GPU, you must be using CUDA 8.0 and cuDNN 5.1 (sorry, experimental OpenCL support is not available on Windows yet)
  • Building your own pip package from scratch is going to be trickier than on *nix systems. Bazel support on Windows is highly experimental, although there has been work on the TensorFlow side to make Bazel installation work as it does on other systems. There’s also CMake support in the contrib folder, though it’s hard to tell whether Bazel or CMake will become the canonical way of building TensorFlow on Windows.
  • There are a few Operations that aren’t available on Windows yet, primarily quantization, gamma function, and depthwise convolution Ops. They’re all listed in the release notes.

Hopefully Windows support gets fully caught up to the rest of the project in the next few months.

Breaking Changes in the Python API

This release has some significant breaking changes to common functions in TensorFlow. For the most part, these are due to Operations being renamed or moved to different submodules within TensorFlow.

tf.initialize_all_variables is now tf.global_variables_initializer

s/initialize_all_variables/global_variables_initializer/g

This is a big change, as initialize_all_variables has been a staple of TensorFlow since it was released. However, it’s for the best in the long run. It was never obvious that initialize_all_variables returned an Operation that needed to be run in a Session, and this change also syncs up with another adjustment:

tf.VARIABLES collection is now tf.GLOBAL_VARIABLES, and tf.all_variables is now tf.global_variables

This is a change that helps make things more explicit. If you were using the VARIABLES collection to loop through and create summaries, you’ll need to change your code a bit. Oh, and speaking of summaries:

All summaries have been renamed from tf.*_summary to tf.summary.*

If you’re an avid TensorBoard user, this is going to be annoying. In the long run, though, it’s probably best not to have every Operation clogging the top-level namespace, and breaking summaries off into their own module makes sense. This includes the new tf.summary.FileWriter, which takes over duty from the now-deprecated tf.train.SummaryWriter. On the bright side, the old Operations are still around (if deprecated), so your code will keep working for the time being.
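If you have a lot of scripts to update, the renames above are mechanical enough to script. Here’s a quick-and-dirty sketch I’d use (not an official migration tool — the pattern list only covers the renames mentioned in this post, not the full 0.12.0 changelog):

```python
import re

# Renames mentioned in this post. Applied in order; the last pattern
# rewrites tf.scalar_summary -> tf.summary.scalar, tf.histogram_summary
# -> tf.summary.histogram, and so on.
RENAMES = [
    (r"tf\.initialize_all_variables", "tf.global_variables_initializer"),
    (r"tf\.all_variables", "tf.global_variables"),
    (r"tf\.VARIABLES", "tf.GLOBAL_VARIABLES"),
    (r"tf\.train\.SummaryWriter", "tf.summary.FileWriter"),
    (r"tf\.(\w+)_summary", r"tf.summary.\1"),
]

def migrate(source):
    """Apply the 0.12.0 renames from this post to a source string."""
    for pattern, replacement in RENAMES:
        source = re.sub(pattern, replacement, source)
    return source

print(migrate("init = tf.initialize_all_variables()"))
# init = tf.global_variables_initializer()
print(migrate("tf.scalar_summary('loss', loss)"))
# tf.summary.scalar('loss', loss)
```

Run it over a file’s contents and eyeball the diff before committing — regexes are blunt instruments.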

batch_* linear algebra Operations have been merged with their non-batch versions

A good change in the long run, as the batch_* syntax was cumbersome and annoying for the most part. It also cleans up the terminology: we can now think of “batch” as relating specifically to training batches and features related to them (batch_normalization, batch_dim, etc.).
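To illustrate the merged semantics with a plain-Python sketch (toy code, not the TensorFlow API): where you previously reached for batch_matmul on rank-3 inputs, matmul now handles both cases, mapping the 2D multiply over the leading batch dimension.

```python
def matmul2d(a, b):
    """Plain 2D matrix multiply on nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def matmul(a, b):
    """Sketch of the merged behavior: rank-2 inputs multiply directly;
    rank-3 inputs map the 2D multiply over the leading (batch) dim."""
    if isinstance(a[0][0], list):  # rank-3 "batch" inputs
        return [matmul2d(x, y) for x, y in zip(a, b)]
    return matmul2d(a, b)

# Rank-2: ordinary matrix multiply.
print(matmul([[1, 2]], [[3], [4]]))  # [[11]]

# Rank-3: one multiply per batch entry, no separate batch_* call needed.
batch = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
identity = [[[1, 0], [0, 1]], [[1, 0], [0, 1]]]
print(matmul(batch, identity))  # [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
```

One function name for both shapes is exactly the ergonomics win the merge is going for.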


This covers what I consider to be the most important changes in this version for the average user. Check out all the changes yourself in the official release notes.


First TensorFlow Book is Published!

Man, it has been a while since I posted anything!

I’ve emerged from my writer’s cave (along with my co-authors) with TensorFlow for Machine Intelligence! In my extremely biased opinion, it’s one of the best resources out there for people trying to get started with TensorFlow. I had a heavy hand in designing the content of the book, and I wanted to make sure that learning the software was as digestible as possible. I’m not a huge fan of marketing buzz-speak, but one of our reviewers, Guillaume Binet, put it this way:

“Finally a TensorFlow book for humans.”

TensorFlow for Machine Intelligence
The book’s snazzy cover!

Mission Accomplished, Sort Of!

I have no idea how it’s going to sell, but hopefully a decent number of people find the book helpful. Right now, a huge part of the reward from this process has been getting even more deeply acquainted with the TensorFlow library. Plus, I can literally say “I wrote the book on it”.

I’ve been working on this with the fine folks at Bleeding Edge Press for several months, and while the timeframe has been pretty short for a book, this release feels like a long time coming.

Special thanks to my awesome co-authors Danijar Hafner, Erik Erwitt, and Ariel Scarpinelli. We had a few tight deadlines, but at the end of the day we’ve got an awesome book!


TensorFlow on Raspberry Pi: Just in Time for Pi Day!

This work was truly a team effort, so please check out the credits of the repo and give everyone there a warm e-hug.

TensorFlow gets smaller as it gets bigger

Earlier today, I released instructions for compiling TensorFlow on the Raspberry Pi 3, as well as a pre-built Python wheel that can be used to install it directly. I’m hoping that this will enable some really cool projects involving both portable GPIO device-based learning and experimentation with TensorFlow’s distributed runtime. This has been an effort that has gone on since TensorFlow was open-sourced, and I’m really happy to be part of the group of people that made it happen.

What’s in the Repo

There are two main attractions to the repository: a pre-built Python wheel that can be used to easily install TensorFlow on a Raspberry Pi 3, and a step-by-step guide to building TensorFlow yourself.

Why Bother?

Several people have asked questions along the lines of: “Why would you want to run TensorFlow on a Raspberry Pi? Its compute power is minuscule.”

The first, quick answer: you probably don’t want to train your sophisticated models on a Raspberry Pi. Instead, train the model on a computer with more processing power (both CPU and GPU), and then move that pre-trained model onto the Pi for real-time use.

The second, more verbose answer:

With so much focus on the insane amount of computing power some companies are using to create breakthroughs in machine learning, such as Google’s AlphaGo recently beating Go champion Lee Sedol, it’s easy to get caught in the mindset that the only worthwhile machine learning problems require hundreds of GPUs. In truth, there are many applications on the opposite end of the spectrum: embedded devices with limited memory and processing power can also take advantage of machine learning. Unfortunately, you can’t keep throwing more hardware into a device smaller than your hand. We don’t want to require our smart health-monitors to be hooked up to the internet in order to detect anomalies; they should be able to use a model built into the device. The hope is that by having a widely cross-compatible and powerful framework, the barrier to making ML-capable devices will be lowered, and installing pre-trained models will become less of a headache on devices with limited hardware.

Plus, having access to GPIO sensors and other devices could enable some really cool prototypes for machine learning that incorporate realtime data from their surroundings. Good stuff all around!

So check it out, let me know what works (and what doesn’t), and let’s keep making TensorFlow a kick-ass framework with a kick-ass community.


TensorFlow Serving: TensorFlow in Production

More good news!

Google announced today that they’re releasing TensorFlow Serving, a way to maintain machine learning models that are defined and trained in TensorFlow.

In a typical production setting, you want a way to swap in new models, or possibly train a model online. However, you need to be able to make this process seamless, or else risk losing service for a period of time. TensorFlow Serving offers a framework that manages the behind-the-scenes work of this process, so that the user can spend more time trying out new models. It’s in its infancy right now, but I’m excited to try it out soon.
