Sergiy Tkachuk

A 10,000-foot overview of TensorFlow 2.0 Alpha

Updated: Apr 16, 2020

You might have heard of the brand-new TensorFlow 2.0 release last month, and it is super exciting to get your hands dirty with the upgraded version of the popular deep learning framework. There are some really cool new features that make coding faster and the syntax more transparent.


In this blog post, I would love to share a broad overview of what was changed and enhanced in the latest Alpha release. I started using TensorFlow a relatively short time ago, and a lot of things, such as session.run(), were not necessarily intuitive for me from the very beginning. From my perspective, TF 2.0 makes deep learning much friendlier.


Keras as a high-level API

Keras was extended, so it is now possible to use all advanced features directly from tf.keras. These include support for eager execution (described later in the post) for faster iteration and easier debugging, integration support for distributed training, and the SavedModel model exchange format.

For those who are still wondering – Keras is not a wrapper for TensorFlow or other libraries.

Keras is an API standard for defining and training machine learning models. Keras is not tied to a specific implementation: The Keras API has implementations for TensorFlow, MXNet, TypeScript, JavaScript, CNTK, Theano, PlaidML, Scala, CoreML, and other libraries.

In addition to extending the Keras API, the developers also did a lot of cleanup, and some of the APIs are either gone or moved in TF 2.0. Lesser-used functions were moved from the main tf.* namespace into subpackages.
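To make this concrete, here is a minimal sketch of defining and training a model entirely through tf.keras (the layer sizes, optimizer, and random data below are my own arbitrary illustrations, not from the release notes):

```python
import numpy as np
import tensorflow as tf

# A tiny classifier built entirely with tf.keras --
# no separate Keras installation needed.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train on random placeholder data just to show the API shape.
x = np.random.rand(32, 4).astype("float32")
y = np.random.randint(0, 3, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```

The same model object can then be saved in the SavedModel format or dropped into a distributed training strategy without code changes.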

Eager execution by default

Eager execution is another big change in TensorFlow 2.0. The declarative style of the previous TF versions stood in contrast to the surrounding Python code, which was a bit counterintuitive and not always very convenient to program in. Eager execution makes this inconvenience disappear – TensorFlow 2.0 behaves just like the surrounding code (adding up two numbers is much simpler now 🙂 ).


import tensorflow as tf

a = tf.constant([[1, 2], [3, 4]])
b = tf.add(a, 1)  # executes immediately - no session, no graph building
print(b)

Output:

tf.Tensor(
[[2 3]
 [4 5]], shape=(2, 2), dtype=int32)

This, obviously, does not mean that you lose the ability to leverage robust program serialization, easy distribution of programs, or optimization on a graph. These features are just easier to use with the latest release 😉


session.run()

I think this one is my favorite! As I mentioned at the beginning of the post, this construct was never obvious to me.


# TensorFlow 1.X

outputs = session.run(f(placeholder),
                      feed_dict={placeholder: input})

# TensorFlow 2.0

outputs = f(input)

In TF 2.0 you can decorate a Python function with tf.function(), and all the benefits of graph mode are still there.

A simple example shows how to use the decorator:


import tensorflow as tf

def dense(x, W, b):
    return tf.nn.sigmoid(tf.matmul(x, W) + b)

@tf.function
def multilayer_perceptron(x, w0, b0, w1, b1, w2, b2):
    x = dense(x, w0, b0)
    x = dense(x, w1, b1)
    return dense(x, w2, b2)
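Under the hood, the decorator traces the Python function into a graph on the first call, and later calls with the same input signature reuse that graph. A minimal self-contained sketch (the square function here is purely illustrative):

```python
import tensorflow as tf

@tf.function
def square(x):
    return x * x

# The first call traces a graph for this input signature;
# subsequent calls with the same signature reuse it.
out = square(tf.constant(3))
print(out)  # tf.Tensor(9, shape=(), dtype=int32)

# The traced graph is still there and can be inspected.
graph = square.get_concrete_function(tf.constant(3)).graph
print(len(graph.get_operations()) > 0)  # True
```

So you get eager-style call syntax while keeping the serializable, optimizable graph underneath.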

Summing up

The new version of TensorFlow brings ease and transparency to code structuring and model training. It will still take time to fine-tune all the features, as this is just an Alpha release. But I believe the stable version will speed up deep learning progress, as it will enable more people to contribute and engage.

There are a couple of additional enhancements definitely worth checking out, e.g. tf.data for efficient input pipelines and the transition guidelines for TensorFlow 1.X users. The official TensorFlow 2.0 Alpha announcement provides an extensive description of what is new. Also, some extremely useful features will come with the new TensorBoard.
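As a taste of tf.data, here is a minimal pipeline sketch (the squaring map and batch size are my own arbitrary choices for illustration):

```python
import tensorflow as tf

# Build a pipeline: the numbers 0..9, squared, in batches of 5.
dataset = (tf.data.Dataset.range(10)
           .map(lambda n: n * n)
           .batch(5))

# With eager execution the dataset is directly iterable,
# no iterator initializers or session runs required.
for batch in dataset:
    print(batch.numpy())
```

The same dataset object plugs straight into model.fit(), which is what makes these pipelines so convenient in TF 2.0.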


I also encourage you to check out the playlist from Dev Summit 2019.

