<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:foaf="http://xmlns.com/foaf/0.1/" xmlns:og="http://ogp.me/ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:schema="http://schema.org/" xmlns:sioc="http://rdfs.org/sioc/ns#" xmlns:sioct="http://rdfs.org/sioc/types#" xmlns:skos="http://www.w3.org/2004/02/skos/core#" xmlns:xsd="http://www.w3.org/2001/XMLSchema#" version="2.0" xml:base="https://www.linuxjournal.com/tag/mysql">
  <channel>
    <title>MySQL</title>
    <link>https://www.linuxjournal.com/tag/mysql</link>
    <description/>
    <language>en</language>
    
    <item>
  <title>nginx and WordPress</title>
  <link>https://www.linuxjournal.com/content/nginx-and-wordpress</link>
  <description>  &lt;div data-history-node-id="1339208" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/12112778334_5b731027e7_z_0.jpg" width="608" height="423" alt="" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/reuven-m-lerner" lang="" about="https://www.linuxjournal.com/users/reuven-m-lerner" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Reuven M. Lerner&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
In &lt;a href="http://www.linuxjournal.com/content/nginx"&gt;my last article&lt;/a&gt;, I took an initial look at nginx, the high-performance,
open-source HTTP server that uses a single process and a single thread to
service a large number of requests. nginx was designed for speed and
scalability, as opposed to Apache, which was designed to maximize
flexibility and configurability. But through the years, nginx has become
increasingly flexible as well, with a growing number of plugins and
modules that can be used to customize its configuration. Between the
performance, the increasingly good documentation and the convenience, it's no
wonder nginx has become so popular.
&lt;/p&gt;

&lt;p&gt;
It's also no surprise that WordPress, the open-source blogging and CMS
platform, has become hugely popular. I've heard people say that 10% of
websites are now run using WordPress. Even if that's not precisely
true, there's no doubt that a huge number of sites are powered by
WordPress. I'm a mostly satisfied WordPress user, having converted my
main site and my two ebook sites to it in the past year after years
of using it to power my blog.
&lt;/p&gt;

&lt;p&gt;
So, I thought it would be interesting to demonstrate how easy it
is to set up WordPress with nginx, given the popularity of each of
these systems alone as well as together. In my last article, I described how you
can set up a plain-vanilla PHP system with nginx; WordPress is a bit
more complex, but less than you might think. Starting with a
bare-bones Linux installation, let's walk through the configuration
needed to get WordPress up and running.
&lt;/p&gt;

&lt;h3&gt;
The Basics&lt;/h3&gt;

&lt;p&gt;
In order to install WordPress and nginx together, you're going
to need three basic software systems installed: WordPress, nginx and
MySQL. The first two are pretty obvious, given this article's goal;
the third is a byproduct of using WordPress, which works exclusively
with MySQL.
&lt;/p&gt;

&lt;p&gt;
So, on my Ubuntu Linux machine, I would run the following:

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
$ sudo apt-get install mysql-server mysql-client nginx-core \
    php5-cli php5-fpm php5-mysql
&lt;/code&gt;
&lt;/pre&gt;


&lt;p&gt;
This installs a very large number of packages, but it will give you
the core of what you need to get your system up and running. Notice
that you're not installing WordPress here, so that you can install it
manually, using the source code. Indeed, installing WordPress via
&lt;code&gt;apt-get&lt;/code&gt; also means installing Apache; although it's certainly possible to
undo this choice, the benefits of installing WordPress on your own
outweigh those of doing it via a package manager.
&lt;/p&gt;
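&lt;p&gt;
Grabbing and unpacking the WordPress source by hand looks something like
this (the target directory here is just an example; point it at whatever
directory nginx will be serving from):

&lt;/p&gt;&lt;pre&gt;
&lt;code&gt;
$ wget https://wordpress.org/latest.tar.gz
$ tar -xzf latest.tar.gz -C /var/www/
&lt;/code&gt;
&lt;/pre&gt;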

&lt;p&gt;
You will, as part of this installation, need to choose a password for
your MySQL root user. This is an important part of security on your
system, so do try to use a strong password.
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/nginx-and-wordpress" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Tue, 08 Nov 2016 12:13:35 +0000</pubDate>
    <dc:creator>Reuven M. Lerner</dc:creator>
    <guid isPermaLink="false">1339208 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Tune Up Your Databases!</title>
  <link>https://www.linuxjournal.com/content/tune-your-databases</link>
  <description>  &lt;div data-history-node-id="1338980" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/11981mysqlf1.jpg" width="550" height="310" alt="" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/shawn-powers" lang="" about="https://www.linuxjournal.com/users/shawn-powers" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Shawn Powers&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
My last full-time job was manager of a university's database
department. Ironically, I know very, very little about databases
themselves. I'm no longer in charge of college databases, but I still
do have a handful of MySQL servers that run my various Web applications. Apart
from &lt;code&gt;apt-get install&lt;/code&gt;, I have no idea how to make databases work.
Thankfully, help is available.
&lt;/p&gt;

&lt;img src="http://www.linuxjournal.com/files/linuxjournal.com/ufiles/imagecache/large-550px-centered/u1002061/11981mysqlf1.jpg" alt="" title="" class="imagecache-large-550px-centered" /&gt;&lt;p&gt;
MySQLTuner is a Perl script that checks your local (or remote) MySQL server
and gives recommendations for improving security and performance. It does
not edit files or actually make changes to the server, but it does give a
very lengthy list of recommendations. If you (like me) are the sort of
person who just tends to copy/paste database setup instructions, running
MySQLTuner is a really good idea.
&lt;/p&gt;

&lt;p&gt;
You can download your copy at &lt;a href="http://mysqltuner.com"&gt;http://mysqltuner.com&lt;/a&gt;. Be sure to read the
documentation to get the most use out of the program. And, if you discover
security problems like the ones shown in my screenshot? Fix them!
&lt;/p&gt;
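&lt;p&gt;
Running it is as simple as fetching the script and handing it to Perl.
Assuming it downloads under its usual filename, something like this
should do:

&lt;/p&gt;&lt;pre&gt;&lt;code&gt;
$ wget http://mysqltuner.com/mysqltuner.pl
$ perl mysqltuner.pl
&lt;/code&gt;&lt;/pre&gt;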

&lt;p&gt;
Thanks to its ability to help improve and secure MySQL servers that 
otherwise might be vulnerable, MySQLTuner gets this month's Editors' Choice
award. If you're imperfect like me, download a copy today and fine-tune
your databases!
&lt;/p&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/tune-your-databases" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Thu, 03 Mar 2016 18:54:57 +0000</pubDate>
    <dc:creator>Shawn Powers</dc:creator>
    <guid isPermaLink="false">1338980 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>MySQL—Some Handy Know-How</title>
  <link>https://www.linuxjournal.com/content/mysql%E2%80%94some-handy-know-how</link>
  <description>  &lt;div data-history-node-id="1338920" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/11878f1.jpg" width="512" height="369" alt="" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/shawn-powers" lang="" about="https://www.linuxjournal.com/users/shawn-powers" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Shawn Powers&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
I recently was talking to someone over IRC who was helping me with a PHP
app that was giving me trouble. The extremely helpful individual asked
me to let him know the value of a certain field in a record on my MySQL
server. I embarrassingly admitted that I'd have to install something
like PHPMyAdmin or Adminer in order to find that information. He was very
gracious and sent me a simple one-liner I could run on the command line
to get the information he needed. I was very thankful, but admittedly
embarrassed. I figured if I don't know how to get simple information from
a MySQL server, there probably are others in the same boat. So,
let's learn a little SQL together.
&lt;/p&gt;
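&lt;p&gt;
For the curious, the sort of one-liner he sent looks something like this
(the table, column and database names here are invented for illustration):

&lt;/p&gt;&lt;pre&gt;&lt;code&gt;
mysql -u root -p -e "SELECT somefield FROM sometable WHERE id = 1" somedatabase
&lt;/code&gt;&lt;/pre&gt;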

&lt;h3&gt;
Get a Database&lt;/h3&gt;

&lt;p&gt;
It turns out there are quite a few sample databases available for download
on the Internet. Unfortunately, they're all far more complicated than I'd
like for demonstration purposes. So, I created my own. Although you don't
have to have my database in order to follow along, it certainly will help
if you do. First, let's create a database and import my data.
&lt;/p&gt;

&lt;p&gt;
The first thing you need to do is install MySQL. Depending on your
distribution, this will either be an &lt;code&gt;apt-get&lt;/code&gt;
command, a &lt;code&gt;yum&lt;/code&gt; command
or a search in the GUI software center. I'll leave the installation
to you—feel free to use Google if you're struggling. The main
thing is to remember the root password you set during the installation
process. This isn't the same as the root password for your system; rather,
it's the password for the root user on your MySQL server. If you're using a live server,
just create a new user/password with access to create databases. I'm going to
assume you've just installed MySQL, and you know the root user's password.

&lt;p&gt;
When you work with MySQL on the command line, you use the
"mysql"
application. So in order to create the database for this example,
type:

&lt;/p&gt;&lt;pre&gt;&lt;code&gt;
mysql -u root -p -e "CREATE DATABASE food"
&lt;/code&gt;&lt;/pre&gt;


&lt;p&gt;
You should be prompted for a password, which is the password you set
during installation for the MySQL root user account. If you get an error
about the database already existing, you can choose a new name for your
database. Just realize that the name you pick will be what you'll
use later when I refer to the "food" database.
&lt;/p&gt;

&lt;p&gt;
Next, you need to get my data into your database. I have an SQL file stored
at &lt;a href="http://snar.co/foodsql"&gt;http://snar.co/foodsql&lt;/a&gt;. You can download that file, or
use &lt;code&gt;wget&lt;/code&gt;
on the command line to get it. If you use &lt;code&gt;wget&lt;/code&gt;, the resulting filename
might be "foodsql" or "food.sql", depending on
how your version of &lt;code&gt;wget&lt;/code&gt;
works. Either filename will work; just make note of what you have so you
can adjust the command you're going to use below. To download and import
the data, type:

&lt;/p&gt;&lt;/div&gt;
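&lt;p&gt;
Assuming &lt;code&gt;wget&lt;/code&gt; saved the file as "foodsql", the download and
import would look something like this:

&lt;/p&gt;&lt;pre&gt;&lt;code&gt;
wget http://snar.co/foodsql
mysql -u root -p food &lt; foodsql
&lt;/code&gt;&lt;/pre&gt;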
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/mysql%E2%80%94some-handy-know-how" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Wed, 30 Dec 2015 19:29:56 +0000</pubDate>
    <dc:creator>Shawn Powers</dc:creator>
    <guid isPermaLink="false">1338920 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Using MySQL for Load Balancing and Job Control under Slurm</title>
  <link>https://www.linuxjournal.com/content/using-mysql-load-balancing-and-job-control-under-slurm</link>
  <description>  &lt;div data-history-node-id="1338863" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/slurm_logo.png" width="436" height="400" alt="" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/steven-buczkowski" lang="" about="https://www.linuxjournal.com/users/steven-buczkowski" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Steven Buczkowski&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;
Like most things these days, modern atmospheric science is all about
big data. Whether it's an instrument flying in an aircraft taking
sets of images several times a second and producing three quarters
of a terabyte of data per flight day over a two-week campaign or a
satellite instrument producing hundreds of gigs of spectral data
daily over a 10–15 year lifetime, data volume is enormous. Simply
analyzing a day's worth of data to keep track of basic instrument
stability is CPU-intensive. Fully processing a day to retrieve the
state of the atmosphere or looking at trends across a decade's worth
of data is exponentially so. 
&lt;/p&gt;
&lt;p&gt;
High-performance parallel cluster
computing is the name of the game. For years I've done this on a
very basic level by kicking off a handful of copies of my processing
scripts on a couple computers around the lab, but after a recent
move into a new lab, I got my first chance to work on a real cluster
system, processing data from a satellite-borne hyperspectral sounder
called AIRS (see Resources). AIRS is one of the instruments onboard NASA's AQUA
satellite that was launched in late 2002 and has been in continuous
operation since. Data from AIRS and similar instruments is used to
map out vertical profiles of atmospheric temperature and
trace gases globally, but we have to be able to process it first.
&lt;/p&gt;

&lt;p&gt;
The cluster computing game here is strictly to get a whole lot of
computers doing the same thing to a whole lot of data so that we can
process it faster than we collect it (much faster would be
preferable). Since I was new to this game just a few months ago,
I've had much to learn about cluster computing and how to design
algorithms and processing software to take advantage of multiple CPUs for
processing. This was my first experience where I had
hundreds of CPUs at my disposal, and it really has changed how I
process data in general. I wrote this article to describe the approach I
was shown for parallelizing this type of data processing, along with a
method I put together that makes the process much cleaner. 
&lt;/p&gt;

&lt;h3&gt;
Basic Slurm&lt;/h3&gt;

&lt;p&gt;
The cluster system here consists of 240 compute nodes, each with
dual 8-core processors and 64GB of main memory, running Red Hat
Enterprise Linux. Cluster jobs are scheduled through the
Slurm workload manager (see Resources). In a nutshell, Slurm is a suite of programs
that allocates computer resources among users and compute
jobs and enforces sharing rules to make sure everyone gets a chance
to get their work in. The two most important programs in the suite
for actually working on the system are &lt;code&gt;sbatch&lt;/code&gt; and
&lt;code&gt;srun&lt;/code&gt;.

&lt;p&gt;
&lt;code&gt;sbatch&lt;/code&gt; is the entry point to the Slurm scheduler and reads a
high-level Bash control script that specifies job parameters (number
of nodes needed, memory per process, expected run times and so on) and
spawns the requested number of identical jobs via calls to &lt;code&gt;srun&lt;/code&gt;.
&lt;/p&gt;&lt;/div&gt;
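&lt;p&gt;
A minimal control script of the kind &lt;code&gt;sbatch&lt;/code&gt; reads might look
roughly like this (the job name, resource numbers and processing script
are invented for illustration):

&lt;/p&gt;&lt;pre&gt;&lt;code&gt;
#!/bin/bash
#SBATCH --job-name=airs_daily
#SBATCH --nodes=1
#SBATCH --mem-per-cpu=4000
#SBATCH --time=02:00:00
#SBATCH --array=1-240

# each array task processes one chunk of the day's data
srun ./process_chunk.sh $SLURM_ARRAY_TASK_ID
&lt;/code&gt;&lt;/pre&gt;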
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/using-mysql-load-balancing-and-job-control-under-slurm" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 19 Oct 2015 18:39:35 +0000</pubDate>
    <dc:creator>Steven Buczkowski</dc:creator>
    <guid isPermaLink="false">1338863 at https://www.linuxjournal.com</guid>
    </item>
<item>
  <title>Moving Databases</title>
  <link>https://www.linuxjournal.com/content/moving-databases</link>
  <description>  &lt;div data-history-node-id="1023471" class="layout layout--onecol"&gt;
    &lt;div class="layout__region layout__region--content"&gt;
      
            &lt;div class="field field--name-field-node-image field--type-image field--label-hidden field--item"&gt;  &lt;img src="https://www.linuxjournal.com/sites/default/files/nodeimage/story/Database_icon_simple.png" width="128" height="128" alt="Database" title="This looks like a pile of cheesecake. I would really like some cheesecake. :)" typeof="foaf:Image" class="img-responsive" /&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-node-author field--type-ds field--label-hidden field--item"&gt;by &lt;a title="View user profile." href="https://www.linuxjournal.com/users/shawn-powers" lang="" about="https://www.linuxjournal.com/users/shawn-powers" typeof="schema:Person" property="schema:name" datatype="" xml:lang=""&gt;Shawn Powers&lt;/a&gt;&lt;/div&gt;
      
            &lt;div class="field field--name-body field--type-text-with-summary field--label-hidden field--item"&gt;&lt;p&gt;I recently moved my personal website from GoDaddy to my home server. I have a business connection at my house, and my site gets little enough traffic that hosting at home on my static IP makes sense. Moving the files wasn't really difficult, I FTP'd them down from the old server, and SFTP'd them up to the new server. Moving the database was a bit more challenging, however.&lt;/p&gt;
&lt;p&gt;If you have shell access, it's a pretty simple process. On the old server, type:&lt;/p&gt;
&lt;pre&gt;mysqldump -u username -p databasename &gt; databasebackup.sql
&lt;/pre&gt;&lt;p&gt;You'll be asked for the password assigned to "username", and then mysqldump will create a file that contains all the information needed to restore your database. One thing to note, however, is that going between different versions of MySQL can be problematic. That's where the --compatible flag is handy. You can specify what type of database software you'll be importing to, and mysqldump will (try to) give you a compatible file. Some options are mysql323, postgresql, mysql40, etc. Check the man page for more options and explanations about what they all do.&lt;/p&gt;
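&lt;p&gt;For example, to aim the dump at an older MySQL 4.0 server, the invocation would look something like this:&lt;/p&gt;
&lt;pre&gt;mysqldump -u username -p --compatible=mysql40 databasename &gt; databasebackup.sql
&lt;/pre&gt;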
&lt;p&gt;To restore your database file on the new server, it's just as easy.  Simply type:&lt;/p&gt;
&lt;pre&gt;mysql -u username -p newdatabasename &lt; databasebackup.sql
&lt;/pre&gt;&lt;p&gt;That should transfer your data simply and easily. If you get errors, you might have to check that --compatible flag, or even do some more work on your database in order to make it compatible. One of the frustrating things with GoDaddy, however, is that you don't get shell access to your hosting account. Since my account was disabled, any MySQL tools that might be available via their website were also unavailable. That's why it's important to have some backup software running regularly on your website. I was able to take an automated backup from a week ago and simply import it into my new server.&lt;/p&gt;
&lt;p&gt;The moral of the story, like most, is that backups are VERY important! It's great to know the tools to make a dump of your MySQL database, but if something is corrupt, you'll want a backup rather than a fresh dump. If you have any other tips for moving databases from one server to another, feel free to leave them in the comments.&lt;/p&gt;
&lt;/div&gt;
      
            &lt;div class="field field--name-node-link field--type-ds field--label-hidden field--item"&gt;  &lt;a href="https://www.linuxjournal.com/content/moving-databases" hreflang="und"&gt;Go to Full Article&lt;/a&gt;
&lt;/div&gt;
      
    &lt;/div&gt;
  &lt;/div&gt;

</description>
  <pubDate>Mon, 15 Aug 2011 13:00:00 +0000</pubDate>
    <dc:creator>Shawn Powers</dc:creator>
    <guid isPermaLink="false">1023471 at https://www.linuxjournal.com</guid>
    </item>

  </channel>
</rss>
