<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:og="http://ogp.me/ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:schema="http://schema.org/" xmlns:sioc="http://rdfs.org/sioc/ns#" xmlns:sioct="http://rdfs.org/sioc/types#" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" version="2.0" xml:base="https://www.linuxjournal.com/tag/deep-learning">
  <channel>
    <title>Deep Learning</title>
    <link>https://www.linuxjournal.com/tag/deep-learning</link>
    <description/>
    <language>en</language>
    
    <item>
  <title>ONNX: the Open Neural Network Exchange Format</title>
  <link>https://www.linuxjournal.com/content/onnx-open-neural-network-exchange-format</link>
  <description>  &lt;div data-history-node-id="1339771" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/onnx.jpg" width="800" height="599" alt="onnx logo" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/user/800928" lang="" about="https://www.linuxjournal.com/user/800928" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Braddock Gaskill&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;
An open-source battle is being waged for the soul of artificial
intelligence. It is being fought by industry titans, universities and
communities of machine-learning researchers world-wide. This article
chronicles one small skirmish in that fight: a standardized file format
for neural networks. At stake is the open exchange of data among a
multitude of tools instead of competing monolithic frameworks.
&lt;/em&gt;&lt;/p&gt;



&lt;p&gt;
The good news is that the battleground is Free and Open. None of the
big players are pushing closed-source solutions. Whether it is Keras and
TensorFlow backed by Google, MXNet from Apache endorsed by Amazon, or Caffe2
and PyTorch supported by Facebook, all of these solutions are open-source software.
&lt;/p&gt;
&lt;p&gt;
Unfortunately, while these projects are &lt;em&gt;open&lt;/em&gt;, they are not
&lt;em&gt;interoperable&lt;/em&gt;. Each framework constitutes a complete stack that
until recently could not interface in any way with any other framework.
A new industry-backed standard, the Open Neural Network Exchange format,
could change that.
&lt;/p&gt;

&lt;p&gt;
Now, imagine a world where you can train a neural network in Keras,
run the trained model through the NNVM optimizing compiler and
deploy it to production on MXNet. And imagine that is just one of
countless combinations of interoperable deep learning tools, including
visualizations, performance profilers and optimizers. Researchers and
DevOps teams no longer need to compromise on a single toolchain that provides
a mediocre modeling environment and so-so deployment performance.
&lt;/p&gt;

&lt;p&gt;
What is required is a standardized format that can express any machine-learning model and store trained parameters and weights, readable and
writable by a suite of independently developed software.
&lt;/p&gt;
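&lt;p&gt;
To make that requirement concrete, here is a toy sketch in Python. This is
not ONNX itself; the schema and function names are invented for
illustration. It shows the essential idea: one file carries both a
description of the architecture and the trained weights, so independently
developed tools can read and write it without sharing any framework code:
&lt;/p&gt;

```python
# Toy model-exchange format (illustrative only, not the ONNX format):
# a JSON header describing the architecture, followed by raw weight bytes.
import io
import json

import numpy as np

def save_model(f, layers, weights):
    """Write an architecture description, then the weight tensors."""
    header = {"layers": layers, "shapes": [list(w.shape) for w in weights]}
    f.write((json.dumps(header) + "\n").encode())
    for w in weights:
        f.write(w.astype(np.float64).tobytes())

def load_model(f):
    """Read back the architecture and weights written by save_model."""
    header = json.loads(f.readline())
    weights = [
        np.frombuffer(f.read(8 * int(np.prod(s))), dtype=np.float64).reshape(s)
        for s in header["shapes"]
    ]
    return header["layers"], weights

# A writer and a reader need only agree on the schema, not on a framework.
buf = io.BytesIO()
save_model(buf, ["dense_relu", "dense_relu"], [np.ones((4, 3)), np.zeros((3, 2))])
buf.seek(0)
layers, weights = load_model(buf)
```

&lt;p&gt;
The real ONNX format fills this role with a protobuf-based schema and a
standardized set of operators, rather than an ad-hoc JSON header.
&lt;/p&gt;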

&lt;p&gt;
Enter the &lt;a href="http://onnx.ai"&gt;Open Neural Network Exchange
Format&lt;/a&gt; (ONNX).
&lt;/p&gt;

&lt;h3&gt;
The Vision&lt;/h3&gt;

&lt;p&gt;
To understand the pressing need for interoperability through a standard like
ONNX, we must first understand the outsized requirements we place on
existing monolithic frameworks.
&lt;/p&gt;

&lt;p&gt;
A casual user of a deep learning framework may think of it as a language
for specifying a neural network. For example, I want 100 input neurons,
three fully connected layers each with 50 ReLU outputs, and a softmax on
the output. My framework of choice has a domain-specific language to express
this (as Caffe does) or bindings to a general-purpose language like Python
with a clear API.
&lt;/p&gt;
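&lt;p&gt;
That description really does fit in a few lines. The following minimal
NumPy sketch (an illustration, not any particular framework's API) runs a
forward pass for exactly that architecture:
&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())  # subtract max for numerical stability
    return e / e.sum()

# 100 input neurons, then three fully connected layers of 50 ReLU units.
sizes = [100, 50, 50, 50]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    for w, b in zip(weights, biases):
        x = relu(x @ w + b)
    return softmax(x)  # softmax on the final 50 outputs

probs = forward(rng.standard_normal(100))
```

&lt;p&gt;
Specifying the network is the easy part; everything a framework does after
this point is where the real complexity lives.
&lt;/p&gt;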

&lt;p&gt;
However, the specification of the network architecture is only the tip of
the iceberg. Once a network structure is defined, the framework still
has a great deal of complex work to do to make it run on your CPU or
GPU cluster.
&lt;/p&gt;

&lt;p&gt;
Python, obviously, doesn't run on a GPU. To make your network definition
run on a GPU, it needs to be compiled into code for the CUDA (NVIDIA) or
OpenCL (AMD and Intel) APIs, or processed in an efficient way if running
on a CPU. This compilation is complex, which is why most frameworks don't
support both NVIDIA and AMD GPU back ends.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/onnx-open-neural-network-exchange-format" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 25 Apr 2018 14:19:00 +0000</pubDate>
    <dc:creator>Braddock Gaskill</dc:creator>
    <guid isPermaLink="false">1339771 at https://www.linuxjournal.com</guid>
    </item>

  </channel>
</rss>
