<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:og="http://ogp.me/ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:schema="http://schema.org/" xmlns:sioc="http://rdfs.org/sioc/ns#" xmlns:sioct="http://rdfs.org/sioc/types#" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" version="2.0" xml:base="https://www.linuxjournal.com/tag/docker">
  <channel>
    <title>Docker</title>
    <link>https://www.linuxjournal.com/tag/docker</link>
    <description/>
    <language>en</language>
    
    <item>
  <title>Build a Versatile OpenStack Lab with Kolla</title>
  <link>https://www.linuxjournal.com/content/build-versatile-openstack-lab-kolla</link>
  <description>  &lt;div data-history-node-id="1340736" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/bigstock-Computing-Cloud-Technology-Dat-268233799.jpg" width="900" height="506" alt="""" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/john-s-tonello" lang="" about="https://www.linuxjournal.com/users/john-s-tonello" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;John S. Tonello&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Hone your OpenStack skills with a full deployment in a single virtual machine.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
It's hard to go anywhere these days without hearing about the urgent need to
deploy on-premises cloud environments that are agile, flexible and don't
cost an arm and a leg to build and maintain. Yet getting your hands on a real
OpenStack cluster—the de facto standard—can be downright impossible.
&lt;/p&gt;

&lt;p&gt;
Enter Kolla-Ansible, an official OpenStack project that allows you to
deploy a complete cluster successfully—including Keystone, Cinder, Neutron,
Nova, Heat and Horizon—in Docker containers on a single, beefy virtual
machine. It's actually just one of an emerging group of official OpenStack
projects that containerize the OpenStack control plane so users can deploy
complete systems in containers and Kubernetes.
&lt;/p&gt;

&lt;p&gt;
To date, for those who don't happen to have a bunch of extra servers loaded
with RAM and CPU cores handy, DevStack has served as the go-to OpenStack lab
environment, but it comes with some limitations. Chief among them is that a
DevStack system cannot be rebooted effectively: rebooting generally
bricks your instances and renders the rest of the stack largely unusable.
DevStack also limits your ability to experiment beyond core OpenStack modules,
whereas Kolla lets you build systems that mimic full production capabilities,
make changes and pick up where you left off after a shutdown.
&lt;/p&gt;

&lt;p&gt;
In this article, I explain how to deploy Kolla, starting from the initial
configuration of your laptop or workstation, to configuration of your cluster,
to putting your OpenStack cluster into service.
&lt;/p&gt;
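
The full walkthrough follows below, but the single-VM flow can be sketched in
a few commands. This is a hedged paraphrase of the Kolla-Ansible quick-start;
exact package versions and the location of the bundled "all-in-one" inventory
file vary by OpenStack release:

```shell
# Sketch of an all-in-one Kolla-Ansible deployment on one beefy VM;
# assumes Docker, Python and pip are already installed on the host.
pip install kolla-ansible                      # pulls in Ansible as well
kolla-genpwd                                   # generate service passwords
kolla-ansible -i all-in-one bootstrap-servers  # prepare the host
kolla-ansible -i all-in-one prechecks          # sanity-check the config
kolla-ansible -i all-in-one deploy             # launch the containers
kolla-ansible post-deploy                      # write an admin openrc file
```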

&lt;h3&gt;
Why OpenStack?&lt;/h3&gt;

&lt;p&gt;
As organizations of all shapes and sizes look to speed development and
deployment of mission-critical applications, many turn to public clouds like
Amazon Web Services (AWS), Microsoft Azure, Google Compute Engine, RackSpace
and many others. All make it easy to build the systems you and your
organization need quickly. Still, these public cloud services come at a
price—sometimes a steep price you only learn about at the end of a billing cycle.
Anyone in your organization with a credit card can spin up servers, even ones
containing proprietary data and inadequate security safeguards.
&lt;/p&gt;

&lt;p&gt;
OpenStack, a community-driven open-source project with thousands of developers
worldwide, offers a robust, enterprise-worthy alternative. It gives you the
flexibility of public clouds in your own data center. In many ways, it's also
easier to use than public clouds, particularly when OpenStack administrators
properly set up networks, carve out storage and compute resources, and provide
self-service capabilities to users. It also has tons of add-on capabilities to
suit almost any use case you can imagine. No wonder 75% of private
clouds are built using OpenStack.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/build-versatile-openstack-lab-kolla" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 07 Aug 2019 17:30:00 +0000</pubDate>
    <dc:creator>John S. Tonello</dc:creator>
    <guid isPermaLink="false">1340736 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Sharing Docker Containers across DevOps Environments</title>
  <link>https://www.linuxjournal.com/content/sharing-docker-containers-across-devops-environments</link>
  <description>  &lt;div data-history-node-id="1340036" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/docker-logo.png" width="800" height="400" alt="docker" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/todd-jacobs" lang="" about="https://www.linuxjournal.com/users/todd-jacobs" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Todd A. Jacobs&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Docker provides a powerful tool for creating lightweight images and
containerized processes, but did you know it can make your development
environment part of the DevOps pipeline too? Whether you're managing
tens of thousands of servers in the cloud or are a software engineer looking
to incorporate Docker containers into the software development life
cycle, this article has a little something for everyone with a passion
for Linux and Docker.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
In this article, I describe how Docker containers flow
through the DevOps pipeline. I also cover some advanced DevOps
concepts (borrowed from object-oriented programming) on how to use
dependency injection and encapsulation to improve the DevOps process.
And finally, I show how containerization can be useful for the
development and testing process itself, rather than just as a
place to serve up an application after it's written.
&lt;/p&gt;


&lt;h3&gt;
Introduction&lt;/h3&gt;

&lt;p&gt;
Containers are hot in DevOps shops, and their benefits from an
operations and service delivery point of view have been covered well
elsewhere. If you want to build a Docker container or deploy a Docker
host, container or swarm, a lot of information is available.
However, very few articles talk about how to &lt;em&gt;develop&lt;/em&gt; inside the Docker
containers that will be reused later in the DevOps pipeline, so that's what
I focus on here.
&lt;/p&gt;

&lt;img src="https://www.linuxjournal.com/sites/default/files/styles/max_650x650/public/u%5Buid%5D/12282f1%281%29.png" width="650" height="130" alt="""" class="image-max_650x650" /&gt;&lt;p&gt;&lt;em&gt;Figure 1.
Stages a Docker Container Moves Through in a Typical DevOps
Pipeline&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;
Container-Based Development Workflows&lt;/h3&gt;

&lt;p&gt;
Two common workflows exist for developing software for use inside Docker
containers:
&lt;/p&gt;

&lt;ol&gt;&lt;li&gt;
Injecting development tools into an existing Docker container:
this is the best option for sharing a consistent development environment
with the same toolchain among multiple developers, and it can be used in
conjunction with web-based development environments, such as Red Hat's
codenvy.com or dockerized IDEs like Eclipse Che.
&lt;/li&gt;

&lt;li&gt;
Bind-mounting a host directory onto the Docker container and using your
existing development tools on the host:
this is the simplest option, and it offers flexibility for developers
to work with their own set of locally installed development tools.
&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;
Both workflows have advantages, but local mounting is inherently simpler. For
that reason, I focus on the mounting solution as "the simplest
thing that could possibly work" here.
&lt;/p&gt;
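
As a concrete illustration of that second workflow (the image, paths and test
runner here are illustrative placeholders, not taken from the article): edit
the project with your own host tools, and run the code inside a stock
container that bind-mounts the project directory:

```shell
# Bind-mount the current project into a throwaway container and run
# the test suite inside it, using the container's toolchain.
docker run --rm -it \
    -v "$PWD":/src \
    -w /src \
    python:3 \
    python -m unittest discover
```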

&lt;p&gt;
&lt;strong&gt;How Docker Containers Move between Environments&lt;/strong&gt;
&lt;/p&gt;

&lt;p&gt;
A core tenet of DevOps is that the source code and runtimes that will be used
in production are the same as those used in development. In other words, the
most effective pipeline is one where the identical Docker image can be reused
for each stage of the pipeline.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/sharing-docker-containers-across-devops-environments" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 18 Dec 2018 13:00:00 +0000</pubDate>
    <dc:creator>Todd A. Jacobs</dc:creator>
    <guid isPermaLink="false">1340036 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Everything You Need to Know about Containers, Part III: Orchestration with Kubernetes</title>
  <link>https://www.linuxjournal.com/content/everything-you-need-know-about-containers-part-iii-orchestration-kubernetes</link>
  <description>  &lt;div data-history-node-id="1339997" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/Kubernetes_%28container_engine%29_2.png" width="800" height="400" alt="Kubernetes" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;A look at using Kubernetes to create, deploy and manage thousands of
container images.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
If you've read the first two articles in this series, you now should be familiar with &lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-i-linux-control-groups-and-process"&gt;Linux kernel control groups (Part I)&lt;/a&gt;,
&lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-ii-working-linux-containers-lxc"&gt;Linux Containers and Docker (Part II)&lt;/a&gt;. But, here's a quick recap: once upon a time, data-center
administrators deployed entire operating systems, occupying entire hardware
servers, to host just a few applications each. That meant a lot of overhead
and a lot to manage. Scaled across multiple server hosts, it became
increasingly difficult to maintain. It was a problem that wasn't easily
solved. It took time for technology to evolve to the point where you could
shrink the operating system and launch these varied applications as
microservices hosted across multiple containers on the same physical machine.
&lt;/p&gt;

&lt;p&gt;
In the final part of this series, I explore the method
most people use to create, deploy and manage containers, a concept typically
referred to as container orchestration. Docker on its own is extremely
simple to use, and running a few images simultaneously is just as
easy. Now, scale that out to hundreds, if not
thousands, of images. How do you manage that? Eventually, you need to step
back and rely on one of the few orchestration frameworks specifically
designed to handle this problem. Enter Kubernetes.
&lt;/p&gt;

&lt;h3&gt;
Kubernetes&lt;/h3&gt;

&lt;p&gt;
Kubernetes, or k8s ("k" followed by eight letters and "s"), originally was developed by
Google. It's an open-source platform aiming to automate container operations:
"deployment, scaling and operations of application containers across
clusters of hosts". Google was an early adopter and contributor to the
Linux Container technology (in fact, Linux Containers power
Google's very own cloud services). Kubernetes eliminates all of the
manual processes involved in the deployment and scaling of containerized
applications. It's capable of clustering together groups of servers hosting
Linux Containers while also allowing administrators to manage those
clusters easily and efficiently.
&lt;/p&gt;

&lt;p&gt;
Kubernetes makes it possible to respond to consumer demands quickly by
deploying your applications in a timely manner, scaling those same
applications with ease and seamlessly rolling out new features, all while
limiting hardware resource consumption. It's extremely modular and can
be hooked into by other applications or frameworks easily. It also provides
additional self-healing services, including auto-placement,
auto-replication and auto-restart of containers.
&lt;/p&gt;&lt;/div&gt;
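
To make that concrete, the deploy/scale/roll-out cycle described above looks
roughly like this at the command line (resource names and image tags are
illustrative, not from the article):

```shell
# Sketch: deploy an image, scale it out, then roll out a new version.
kubectl create deployment web --image=nginx:1.14
kubectl scale deployment web --replicas=10
kubectl set image deployment/web nginx=nginx:1.15   # rolling update
kubectl rollout status deployment/web
```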
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-containers-part-iii-orchestration-kubernetes" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 28 Nov 2018 12:30:00 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1339997 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Everything You Need to Know about Linux Containers, Part II: Working with Linux Containers (LXC)</title>
  <link>https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-ii-working-linux-containers-lxc</link>
  <description>  &lt;div data-history-node-id="1339992" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/bigstock--211838674_0.jpg" width="800" height="533" alt="""" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/petros-koutoupis" lang="" about="https://www.linuxjournal.com/users/petros-koutoupis" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Petros Koutoupis&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;&lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-i-linux-control-groups-and-process"&gt;Part I of this Deep Dive on containers&lt;/a&gt; introduces
the idea of kernel control groups, or cgroups, and the way you can isolate,
limit and monitor selected userspace applications. Here,
I dive a bit deeper and focus on the next step of process
isolation—that is, through containers, and more specifically, the Linux
Containers (LXC) framework.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
Containers are about as close to bare metal as you can get when running
virtual instances, and they impose little to no overhead. First introduced
in 2008, LXC adopted much of its functionality from the Solaris Containers
(or Solaris Zones) and FreeBSD jails that preceded it. Instead of creating a
full-fledged virtual machine, LXC enables a virtual environment with its own
process and network space. Using namespaces to enforce process isolation and
leveraging the kernel's very own control groups (cgroups) functionality, LXC
limits, accounts for and isolates the CPU, memory, disk I/O and network
usage of one or more processes. Think of this userspace framework as a very
advanced form of
&lt;code&gt;chroot&lt;/code&gt;.
&lt;/p&gt;

&lt;p&gt;
But what exactly are containers? The short answer is that containers decouple software
applications from the operating system, giving users a clean and minimal
Linux environment while running everything else in one or more isolated
"containers". The purpose of a container is to launch a limited set
of applications or services (often referred to as microservices) and have
them run within a self-contained sandboxed environment.
&lt;/p&gt;
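
A minimal LXC session illustrating that sandboxed environment might look like
this (the container name and distribution release are arbitrary choices;
running it requires the LXC userspace tools and appropriate privileges):

```shell
# Create, start, poke at, and destroy a container with the LXC tools.
lxc-create -n demo -t download -- -d ubuntu -r bionic -a amd64
lxc-start -n demo
lxc-attach -n demo -- ps aux     # shows only the container's own processes
lxc-stop -n demo
lxc-destroy -n demo
```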

&lt;img src="https://www.linuxjournal.com/sites/default/files/styles/max_1300x1300/public/u%5Buid%5D/ContainerModel.png" width="557" height="250" alt="""" class="image-max_1300x1300" /&gt;&lt;p&gt;
&lt;em&gt;Figure 1. A Comparison of
Applications Running in a Traditional Environment to Containers&lt;/em&gt;
&lt;/p&gt;

&lt;p&gt;
This isolation prevents processes running within a given container from
monitoring or affecting processes running in another container. Also, these
containerized services do not influence or disturb the host machine. The
ability to consolidate many services scattered across multiple physical
servers onto a single machine is one of the many reasons data centers have
chosen to adopt the technology.
&lt;/p&gt;

&lt;p&gt;
Container features include the following:
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/everything-you-need-know-about-linux-containers-part-ii-working-linux-containers-lxc" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 27 Aug 2018 11:30:00 +0000</pubDate>
    <dc:creator>Petros Koutoupis</dc:creator>
    <guid isPermaLink="false">1339992 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>The Search for a GUI Docker</title>
  <link>https://www.linuxjournal.com/content/search-gui-docker</link>
  <description>  &lt;div data-history-node-id="1339996" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/dockerlogo.png" width="800" height="400" alt="docker" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/shawn-powers" lang="" about="https://www.linuxjournal.com/users/shawn-powers" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Shawn Powers&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;&lt;em&gt;Docker is everything but pretty; let's try to fix that. Here's a rundown of 
some GUI options available for Docker.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;
I love Docker. At first it seemed a bit silly to me for a small-scale
implementation like my home setup, but after learning how to use it, I fell
in love. The standard features are certainly beneficial. It's great not
worrying that one application's dependencies will step on or conflict
with another's. But most applications are good about playing well with
others, and package management systems keep things in order. So why do I
&lt;code&gt;docker run&lt;/code&gt; instead of &lt;code&gt;apt-get install&lt;/code&gt;? Individualized system settings.
&lt;/p&gt;

&lt;p&gt;
With Docker, I can have three of the same apps running side by side. They
even can use the same port (internally) and not conflict. My torrent
client can live inside a forced-VPN network, and I don't need to worry that it will
somehow "leak" my personal IP data. Heck, I can run apps that work
only on CentOS inside my Ubuntu Docker server, and it just works! In short,
Docker is amazing.
&lt;/p&gt;

&lt;p&gt;
I just wish I could remember all the commands.
&lt;/p&gt;

&lt;p&gt;
Don't get me wrong, I'm familiar with Docker. I use it for most of my
server needs. It's my first go-to when testing a new app. Heck, I taught
an entire course on Docker for CBT Nuggets (my day job). The problem is,
Docker works so well, I rarely need to interact with it. So, my FIFO
buffer fills up, and I forget the simple command-line options to make
Docker work. Also, because I like charts and graphs, I decided to install
a Docker GUI. It was a bit of an adventure, so I thought I'd share the
ins and outs of my experience.
&lt;/p&gt;

&lt;h3&gt;
My GUI Expectations&lt;/h3&gt;

&lt;p&gt;
There are some things I don't really care about for a GUI. Oddly, one of
the most common uses people have for a visual interface is the ability to
create a Docker container. I actually don't mind using the command line
when I'm creating a container, because it usually takes 5–10 attempts
and tweaks before I get it how I want it. So for me, I'd like to have
at least the following features:
&lt;/p&gt;

&lt;ul&gt;&lt;li&gt;
A visual layout of all containers, whether or not they're running.
&lt;/li&gt;

&lt;li&gt;
A way to start/stop/delete containers.
&lt;/li&gt;
&lt;li&gt;
The ability to rename running containers, because I always forget to name
them, and I get tired of seeing "chubby_cheetah" for container names.
&lt;/li&gt;

&lt;li&gt;
A way to change the restart policy easily, so when I finally get a container
right, I can have it &lt;code&gt;--restart=always&lt;/code&gt;.
&lt;/li&gt;

&lt;li&gt;
Show some statistics about the system and individual containers.
&lt;/li&gt;

&lt;li&gt;
Read logs.
&lt;/li&gt;

&lt;li&gt;
Work via web interface, so I can use it remotely.
&lt;/li&gt;

&lt;li&gt;
Be a Docker container itself!
&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;
My list of needs is fairly simple, but oddly, many GUIs left me
wanting. Since everyone's desires are different, I'll go over the most
popular options I tried, and mention some pros and cons.
&lt;/p&gt;&lt;/div&gt;
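
For reference, most of that wish list is already reachable from the CLI,
which is exactly the sort of thing I keep forgetting ("myapp" is a
placeholder container name, not from the article):

```shell
docker ps -a                           # all containers, running or not
docker rename chubby_cheetah myapp     # fix an auto-generated name
docker update --restart=always myapp   # change the restart policy in place
docker stats --no-stream               # one-shot per-container statistics
docker logs --tail 50 myapp            # read recent logs
```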
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/search-gui-docker" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 31 Jul 2018 12:00:00 +0000</pubDate>
    <dc:creator>Shawn Powers</dc:creator>
    <guid isPermaLink="false">1339996 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>FOSS Project Spotlight: Pydio Cells, an Enterprise-Focused File-Sharing Solution</title>
  <link>https://www.linuxjournal.com/content/foss-project-spotlight-pydio-cells-enterprise-focused-file-sharing-solution</link>
  <description>  &lt;div data-history-node-id="1339956" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/bigstock-Icon-Of-Exchanging-Files-Conc-231776218.jpg" width="600" height="600" alt="""" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/italo-vignoli" lang="" about="https://www.linuxjournal.com/users/italo-vignoli" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Italo Vignoli&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
Pydio Cells is a brand-new product focused on the needs of enterprises and
large organizations, brought to you by the people who launched the concept
of the open-source file sharing and synchronization solution in 2008. The
goal behind Pydio Cells is ambitious: to be to file sharing what Slack has
been to chat—that is, a revolution in terms of features, power and ease of
use.
&lt;/p&gt;

&lt;p&gt;
In order to reach this objective, Pydio's development team has switched
from the old-school development stack (Apache and PHP) to Google's Go
language to overcome the bottleneck represented by legacy technologies.
Today, Pydio Cells offers a faster, more scalable microservice architecture
that is in tune with dynamic modern enterprise environments.
&lt;/p&gt;

&lt;p&gt;
In fact, Pydio's new "Cells" concept delivers file sharing as a
modern collaborative app. Users are free to create flexible group spaces for
sharing based on their own ways of working with dedicated in-app messaging
for improved collaboration.
&lt;/p&gt;

&lt;p&gt;
In addition, the enterprise data management functionality gives both
companies and administrators reassurance, with controls and reporting that
directly answer corporate requirements around the General Data Protection
Regulation (GDPR) and other tightening data
protection regulations.
&lt;/p&gt;

&lt;h3&gt;
Pydio Loves DevOps&lt;/h3&gt;

&lt;p&gt;
In tune with modern enterprise DevOps environments, Pydio Cells now runs as
its own application server (offering a dependency-free binary, with no need for
external libraries or runtime environments). The application is available as
a Docker image, and it offers out-of-the-box connectors for
containerized application orchestrators, such as Kubernetes.
&lt;/p&gt;

&lt;p&gt;
Also, the application has been broken up into a series of logical
microservices. Within this new architecture, each service is allocated its
own storage and persistence, and can be scaled independently. This enables
you to manage and scale Pydio
more efficiently, allocating resources to each
specific service.
&lt;/p&gt;

&lt;p&gt;
The move to Golang has delivered a ten-fold improvement in performance. At
the same time, by breaking the application into logical microservices, larger
users can scale the application by targeting greater resources only to the
services that require it, rather than inefficiently scaling the entire
solution.
&lt;/p&gt;

&lt;h3&gt;
Built on Standards&lt;/h3&gt;

&lt;p&gt;
The new Pydio Cells architecture has been built with a renewed focus on the
most popular modern open standards:
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/foss-project-spotlight-pydio-cells-enterprise-focused-file-sharing-solution" hreflang="en"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Fri, 13 Jul 2018 14:20:00 +0000</pubDate>
    <dc:creator>Italo Vignoli</dc:creator>
    <guid isPermaLink="false">1339956 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Managing Docker Instances with Puppet</title>
  <link>https://www.linuxjournal.com/content/managing-docker-instances-puppet</link>
  <description>  &lt;div data-history-node-id="1339445" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/Puppet%27s_company_logo.png" width="600" height="211" alt="" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/todd-jacobs" lang="" about="https://www.linuxjournal.com/users/todd-jacobs" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Todd A. Jacobs&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
In a previous article, "Provisioning Docker with Puppet", in the December
2016 issue, I covered one of the ways
you can install the Docker service onto a new system with Puppet. By
contrast, this article focuses on how to manage Docker images and
containers with Puppet.
&lt;/p&gt;

&lt;h3&gt;
Reasons for Integrating Docker with Puppet
&lt;/h3&gt;

&lt;p&gt;
There are three core use cases for integrating Docker with Puppet or
with another configuration management tool, such as Chef or Ansible:
&lt;/p&gt;

&lt;ol&gt;&lt;li&gt;
&lt;p&gt;
Using configuration management to provision the Docker service on a
host, so that it is available to manage Docker instances.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Adding or removing specific Docker instances, such as a containerized
web server, on managed hosts.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Managing complex or dynamic configurations inside Docker
containers using configuration management tools (for example, Puppet agent)
baked into the Docker image.
&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;
"Provisioning Docker with Puppet", in the December 2016 issue
of &lt;em&gt;LJ&lt;/em&gt;, covered the first use case. This article is
primarily concerned with the second.
&lt;/p&gt;

&lt;p&gt;
Container management with Puppet allows you to do a number of things that
become ever more important as an organization scales up its systems,
including the following:
&lt;/p&gt;

&lt;ol&gt;&lt;li&gt;
&lt;p&gt;
Leveraging the organization's existing configuration management
framework, rather than using a completely separate process just to
manage Docker containers.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Treating Docker containers as "just another resource" to converge in
the configuration management package/file/service lifecycle.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Installing Docker containers automatically based on hostname, node
classification or node-specific facts.
&lt;/p&gt;&lt;/li&gt;

&lt;li&gt;
&lt;p&gt;
Orchestrating commands inside Docker containers on multiple hosts.
&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;
Although there certainly are other ways to achieve those goals (see
the Picking a Toolchain sidebar), it takes very little work to extend
your existing Puppet infrastructure to handle containers as part of a
node's role or profile. That's the focus for this article.
&lt;/p&gt;
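&lt;p&gt;
As a rough sketch of the second use case, a node's profile might declare a
containerized web server directly in its manifest. This example assumes the
garethr/docker module from the Puppet Forge is installed; the class name
&lt;code&gt;profile::webserver&lt;/code&gt; and the nginx image are illustrative, not
taken from the article:
&lt;/p&gt;

&lt;pre&gt;
# Illustrative only: pull an image and run a container via Puppet
class profile::webserver {
  # Ensure the nginx image is present on the managed host
  docker::image { 'nginx':
    ensure =&gt; present,
  }

  # Run a container from that image, publishing port 80 on the host
  docker::run { 'webserver':
    image =&gt; 'nginx',
    ports =&gt; ['80:80'],
  }
}
&lt;/pre&gt;

&lt;p&gt;
On each agent run, Puppet converges the container alongside the node's other
resources, which is exactly the "just another resource" treatment described
above.
&lt;/p&gt;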

&lt;h3&gt;
Picking a Toolchain
&lt;/h3&gt;

&lt;p&gt;
Why focus on container management with Puppet? There certainly are other
ways to manage Docker instances, containers and clusters, including
some native to Docker itself. As with any other IT endeavor, your chosen
toolchain both provides and limits your capabilities. For a home system,
your choice of toolchain is largely a matter of taste, but in the
data center, it's often better to leverage existing tools and in-house
expertise whenever possible.
&lt;/p&gt;

&lt;p&gt;
Puppet was chosen for this series of articles because it is a strong
enterprise-class solution that has been widely deployed for more than a
decade. However, you could do much the same thing with Chef or Ansible
if you choose.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/managing-docker-instances-puppet" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Thu, 20 Jul 2017 13:40:30 +0000</pubDate>
    <dc:creator>Todd A. Jacobs</dc:creator>
    <guid isPermaLink="false">1339445 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Applied Expert Systems, Inc.'s CleverView for TCP/IP on Linux</title>
  <link>https://www.linuxjournal.com/content/applied-expert-systems-incs-cleverview-tcpip-linux-0</link>
  <description>  &lt;div data-history-node-id="1339440" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/12202f2.png" width="500" height="212" alt="" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/james-gray" lang="" about="https://www.linuxjournal.com/users/james-gray" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;James Gray&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
The contemporary data center is typified by an ever-increasing amount of traffic
between servers, observes &lt;a href="http://www.aesclever.com"&gt;Applied Expert Systems, Inc.&lt;/a&gt; (AES).
To help IT staff keep pace with that server-to-server communication, AES created
CleverView for TCP/IP on Linux, now at v2.7.
CleverView gives IT staff access to current and historical server
performance and availability details not only from their browser desktops but
also from their mobile phones via the CLEVER Mobile for Linux app. 
&lt;/p&gt;

&lt;p&gt;
Version 2.7
features enhancements to DockerView: container details, including
resource utilization and process information, with the ability to drill down
into specific containers; and image details, including repository and image ID,
with historical data. 
&lt;/p&gt;

&lt;p&gt;
Finally, new options in the Enhanced Dashboard include the ability to download
a graph image, manipulate graph formats and display raw data, as well as a
zoom feature with one-click navigation to view Alert Details from
the Alerts Summary graph.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/applied-expert-systems-incs-cleverview-tcpip-linux-0" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 12 Jul 2017 15:13:24 +0000</pubDate>
    <dc:creator>James Gray</dc:creator>
    <guid isPermaLink="false">1339440 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>AWS Quickstart for Kubernetes</title>
  <link>https://www.linuxjournal.com/content/aws-quickstart-kubernetes</link>
  <description>  &lt;div data-history-node-id="1339434" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/1-6FirDRqa828LdvbLdkpXNw.png" width="800" height="519" alt="AWS Quickstart for Kubernetes" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/craig-mcluckie-0" lang="" about="https://www.linuxjournal.com/users/craig-mcluckie-0" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Craig McLuckie&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;Kubernetes is an open-source cluster manager that makes it easy to run Docker and other containers in production environments of all types (on-premises or in the public cloud). What is now an open community project came from development and operations patterns pioneered at Google to manage complex systems at internet scale.
&lt;p&gt;
&lt;/p&gt;
&lt;img src="http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u800391/1-6FirDRqa828LdvbLdkpXNw.png" alt="" title="" class="imagecache-large-550px-centered" /&gt;&lt;p&gt;
&lt;/p&gt;
AWS Quick Starts are a simple and convenient way to deploy popular open-source software solutions on Amazon’s infrastructure. While the current Quick Start is appropriate for development workflows and small team use, we are committed to continuing our work with the Amazon solutions architects to ensure that it captures operations and architectural best practices. It should be easy to get started now, and achieve long term operational sustainability as the Quick Start grows.
&lt;p&gt;
&lt;/p&gt;
Our hope is that you will be able to use the &lt;a href="https://github.com/heptio/aws-quickstart/blob/master/templates/kubernetes-cluster-with-new-vpc.template"&gt;CloudFormation template&lt;/a&gt; and &lt;a href="https://s3.amazonaws.com/quickstart-reference/heptio/latest/doc/heptio-kubernetes-on-the-aws-cloud.pdf"&gt;written guide&lt;/a&gt; to get going quickly with Kubernetes. Or, wire the Quick Start template into CloudFormation templates you already have, bringing Cloud Native Computing elements on Amazon’s infrastructure to your existing solutions.
&lt;p&gt;
&lt;/p&gt;
It is also worth mentioning that the AWS Quick Start represents our first upstream-friendly, supported configuration. At Heptio we are working hard to make Kubernetes more accessible to developers everywhere, and to provide quality support and services to Kubernetes users who want a clean, friendly, supported configuration of the upstream open-source project.
&lt;p&gt;
&lt;/p&gt;
You can expect to see us put the work into maintaining and enhancing this Quick Start. We also view it as a way to help other key members of the Kubernetes ecosystem deliver value on the Amazon platform. We believe it will “take a village” to bring the full potential of Cloud Native Computing to the enterprise, so we are passionate about helping our partners realize the full potential of their technology on a convenient Kubernetes base.
&lt;p&gt;
&lt;/p&gt;
Working with our friends at &lt;a href="https://www.tigera.io/"&gt;Tigera&lt;/a&gt;, we have integrated Project Calico into the AWS Quick Start so you have production-ready, secure networking right out of the box. Check out their Calico for Kubernetes guide &lt;a href="http://www.projectcalico.org/hqs"&gt;here&lt;/a&gt;.
&lt;p&gt;
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/aws-quickstart-kubernetes" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 28 Jun 2017 16:27:03 +0000</pubDate>
    <dc:creator>Craig McLuckie</dc:creator>
    <guid isPermaLink="false">1339434 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>SQL Server on Linux</title>
  <link>https://www.linuxjournal.com/content/sql-server-linux</link>
  <description>  &lt;div data-history-node-id="1339424" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/database-152091_640.png" width="544" height="600" alt="" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/john-s-tonello" lang="" about="https://www.linuxjournal.com/users/john-s-tonello" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;John S. Tonello&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
When Wim Coekaerts, Microsoft's vice president for open source, took the
stage at LinuxCon 2016 in Toronto last summer, he came not as an adversary, but
as a longtime Linux enthusiast promising to bring the power of Linux to Microsoft
and vice versa. With the recent launch of SQL Server for Linux, Coekaerts is
clearly having an impact.
&lt;/p&gt;

&lt;p&gt;
&lt;a href="https://github.com/PowerShell/PowerShell"&gt;PowerShell&lt;/a&gt; for Linux and
&lt;a href="https://msdn.microsoft.com/en-us/commandline/wsl/install_guide"&gt;bash for
Windows&lt;/a&gt; heralded the beginning, but the arrival
of SQL Server, one of the most popular relational databases out there, offers
Linux shops some real opportunities—and a few conundrums.
&lt;/p&gt;

&lt;p&gt;
Clearly, the opportunity to deploy SQL Server on something other than a Windows
Server means you can take advantage of the database's capabilities without
having to manage Windows hosts to do it. If you're a mostly-Linux shop (or
want to be) and you have customers looking to deploy workloads and applications
that require SQL Server, you now have a true Linux solution.
&lt;/p&gt;

&lt;p&gt;
At the same time, if you're big enough to have cabinets full of database
server hardware, you probably have databases that serve real-time workloads and
databases that underpin your data warehouse. If you're running the latter on
beefy hardware necessary to manage the overhead of both Windows Server and SQL
Server, the advent of SQL Server on Linux might give you an alternative.
&lt;/p&gt;

&lt;p&gt;
For instance, you might shift your lower-resource data warehouses to
resource-sipping Linux servers with SQL Server. That could save you on hardware
and migration costs, given that there are no structural differences between SQL
Server running on Windows or Linux.
&lt;/p&gt;

&lt;p&gt;
If you were contemplating shifting your data warehouse from SQL Server to MariaDB
or Oracle to take advantage of Linux hardware savings, you wouldn't have to
fret about the conversion costs. Even though you'd still pay for SQL Server
licenses, you could save on the costs to convert and migrate to make up the
difference.
&lt;/p&gt;

&lt;p&gt;
On the conundrum side, you may ask why you might need Microsoft's offering at
all. After all, open-source databases like &lt;a href="http://www.mariadb.org"&gt;MariaDB&lt;/a&gt; (or &lt;a href="http://www.mysql.org"&gt;MySQL&lt;/a&gt;) and &lt;a href="https://www.postgresql.org"&gt;PostgreSQL&lt;/a&gt; are
robust, well tested, free and supported by large communities. Why introduce a
historically closed-source proprietary tool to your open-source environment? SQL
Server 2016 Standard &lt;a href="https://www.microsoft.com/en-us/sql-server/sql-server-2016-pricing"&gt;lists
for about $3,717 per core&lt;/a&gt;, though the Developer and
Express versions are free, with Express able to handle up to 10GB for your
data-driven applications.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/sql-server-linux" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Thu, 15 Jun 2017 20:54:12 +0000</pubDate>
    <dc:creator>John S. Tonello</dc:creator>
    <guid isPermaLink="false">1339424 at https://www.linuxjournal.com</guid>
    </item>

  </channel>
</rss>
