About Lynn Ferrante

Lynn Ferrante has worked with databases in the enterprise for her whole career, at MySQL, Oracle, Sybase, and Ingres. She also worked on an open source project called GenMapp (University of California, San Francisco) and contributed to the development of many database applications in the fields of energy and environment. She is currently a Sales Consultant at MySQL.

How to Back Up Selected MySQL Databases

I recently had to back up 30 very large databases within a MySQL instance that had 60+ databases in total. I did NOT want to back up the other databases. In fact, the other databases had very large tables in them and I had very little disk space, so I could not afford to back them up. In this post I will share what I learned.

I’ve simplified the problem to illustrate the idea. My goal in this post is to back up one database when an instance has many, with mixed InnoDB and MyISAM tables.

  • I have 3 databases called dbtest1, dbtest2, and dbtest3 in one instance of MySQL. I only want to back up the tables in database dbtest2


  • Each database has one InnoDB and one MyISAM table
  • The InnoDB tables are quite large, and I have limited disk space for the backup
  • I am using file-per-table option for all InnoDB tables

The --databases Option

First I tried the --databases=dbname option on the mysqlbackup command line. This option filters the non-InnoDB files (FRM, MYD, MYI, etc.): only those belonging to the listed databases are backed up.

mysqlbackup -u username -p --databases="dbtest2" --backup-dir=yourdirectory backup

This translates to:

  • Back up all the non-InnoDB files (including FRM files) in the database I specified (dbtest2)
  • Do NOT back up the non-InnoDB files in any other database
  • Back up all IBD files in the entire instance

A full backup of dbtest2 was saved (as outlined above), but the IBD files in all the other databases in the instance were also backed up. So this is not exactly the option I needed.

The --include Option

Next I added the --include option to filter out the IBD files in other databases. It’s important to think of the --include option as filtering IBD files (not InnoDB tables, which in my mind include both IBD and FRM files).

mysqlbackup -u username -p --databases="dbtest2" --include="dbtest2" --backup-dir=yourdirectory backup

This translates to:

  • Back up all the non-InnoDB files (including FRM files) in the database I specified (dbtest2), as before
  • Do NOT back up the non-InnoDB files in any other database, as before
  • Back up IBD files only in the dbtest2 database


This gave me exactly what I wanted: one directory with all the IBD, FRM, MYD, and MYI files for dbtest2. All other databases are completely ignored, so I did not run out of disk space!

Note that the --include syntax does not permit you to list more than one database at a time. However, it does allow you to use regular expression syntax, such as:

--include=db.                   (include all databases starting with db followed by any character)

or

--include=dbtest[12]            (include dbtest1 and dbtest2)

Compression

Consider trying compression. In my original scenario I also used compression, which was a great help: it shrank my 1300 gigabytes of files by 80% in the backup. Of course, the downside to compression is that you must decompress to restore, but it is definitely worth considering if you have space limitations.
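As a sketch (the user name and directory are placeholders, as above), compression is enabled by adding the --compress option to the same partial-backup command:

```shell
# Compressed partial backup of dbtest2 only
# (username and yourdirectory are placeholders)
mysqlbackup -u username -p \
  --databases="dbtest2" --include="dbtest2" \
  --compress \
  --backup-dir=yourdirectory backup
```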

A Visual Guide to the MySQL Performance Schema

If you haven’t explored the MySQL Performance Schema yet, this is a good place to start. This is Performance Schema 101, a basic introduction to the MySQL 5.6 performance_schema, which records runtime statistics from the MySQL database. The performance_schema is intended to provide access to useful information about server execution while having minimal impact on server performance. Performance_schema is the name of both the storage engine and the database itself, and it was initially implemented in MySQL 5.5. In MySQL 5.6 the engineers added quite a bit of new instrumentation.

The performance_schema database uses views or temporary tables that require little to no persistent disk storage. Memory allocation is all done at server startup, so there is no ongoing memory reallocation or resizing, which is great for performance.

I categorize the performance_schema tables into four areas:

Types of Tables in the Performance Schema

The Setup tables keep track of the logistics: who, what, and how to monitor, and where to save the data. It’s important to understand what is in these tables so you know who and what is being reported on, and what can be changed.

Current Events tables contain the most recently collected monitored event.

History tables are similar to Current Events tables, but contain a  history of what has occurred over time.

Object Instances tables record what instances of objects (files, locks, mutexes, etc) are instrumented.

In this post we will take a closer look at the setup tables. Most tables in the performance_schema are read only, but some of the setup tables accept data manipulation language (DML) so that the configuration can be changed. Remember, however, that all tables are cleared when the server is shut down. There are also options you can set in the .cnf file for some setup values, discussed below. There are five setup tables:


Setup tables (note that all table names are prefixed with setup_)

Who to Monitor

The setup_actors table tells MySQL who to monitor. It contains only three columns: HOST, USER, and ROLE (the latter is not currently used). The performance schema looks in the setup_actors table to see whether a connecting thread should be monitored. Initially this table is set to monitor all foreground threads (all fields are set to %).

You can insert, update, and delete rows, for example if you want to monitor only a specific user, or if you want to exclude a set of users such as batch loading jobs. Change the HOST and USER fields to indicate this. Any changes to this table apply only to new connections.
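For example, to monitor only a single application account (the account name app_user here is hypothetical), something along these lines should work; the default catch-all row is removed first:

```sql
-- Remove the catch-all row that monitors every foreground thread
DELETE FROM performance_schema.setup_actors
 WHERE HOST = '%' AND USER = '%';

-- Monitor only app_user, connecting from any host (hypothetical account)
INSERT INTO performance_schema.setup_actors (HOST, USER, ROLE)
VALUES ('%', 'app_user', '%');
```

Remember that this affects only connections made after the change.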


What tables to monitor   

The setup_objects table contains the tables you are monitoring. By default, all tables are monitored except those in the mysql, performance_schema, and information_schema databases.


If you have a table that you don’t want to monitor, for example a logging table, you can exclude it through this table. The default size of this table is 100 rows, but it can be increased through the performance_schema_setup_objects_size variable.

How to modify:  Insert or update this table.
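As a sketch (the schema and table names here are made up), excluding a logging table would look something like this:

```sql
-- Stop instrumenting a hypothetical logging table
INSERT INTO performance_schema.setup_objects
       (OBJECT_TYPE, OBJECT_SCHEMA, OBJECT_NAME, ENABLED, TIMED)
VALUES ('TABLE', 'mydb', 'app_log', 'NO', 'NO');
```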

What instrumentation to collect      

The setup_instruments table indicates what can be monitored by listing the “instruments” currently available in the MySQL Server. Think of the MySQL Server as being equipped with pieces of code that perform specific functions or measurements, called “instruments”. Each of these coded instruments can collect certain specific information. Each instrument has a NAME that begins with one of four categories: statement, stage, idle, or wait. Instruments use a naming convention (a linear taxonomy) that goes from general on the left to specific on the right, for example:

statement/sql/create_table
wait/io/file/myisam/log
  • Statement indicates an SQL statement (like SELECT) or command (like CONNECT)
  • Stage is the stage of statement processing, like SORTING RESULT
  • Wait is an instrumented wait event, like WAIT/IO/FILE or WAIT/LOCK
  • Idle is when a connection is idle, that is, a socket is idle while waiting for a request from the client

There are over 500 instruments listed in this table in 5.6. You will not normally need to collect all the instrumentation, so each instrument can be enabled or disabled. Out of the box, fewer than half of these instruments are enabled (ENABLED = ‘YES’). If an instrument is not enabled, no information is collected. When you need to investigate in more detail, the other instruments can be enabled as needed by updating the ENABLED column. The TIMED column indicates whether that instrument is timed or not. If it is, the unit of measurement is stored in the setup_timers table.
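For example, enabling (and timing) all of the file I/O wait instruments is a single UPDATE against this table:

```sql
-- Enable and time every file I/O wait instrument
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES', TIMED = 'YES'
 WHERE NAME LIKE 'wait/io/file/%';
```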

To control an instrument at server startup, use
an option of this form: --performance_schema_instrument='instrument_name=value'

Where To Store It

The setup_consumers table indicates where the monitored data will go. It has only two columns: NAME and ENABLED.

The global_instrumentation consumer is the highest level and must be enabled for any monitoring to be collected in tables. If global_instrumentation is enabled, instrumentation for global states is kept, and the next level, thread_instrumentation, is checked. If thread_instrumentation is enabled, the Performance Schema maintains thread-specific information. Most consumer NAMEs are tables (the events% entries listed above) where data will be written. Note that several of the tables are history tables, which can be disabled if you are not interested in keeping history.

We also have digest tables.  The statements_digest contains a normalized view of SQL statements, taking out specific values, white space, and comments from similar SQL statements and grouping them together. You can then see what groups of statements are executing and how frequently they run.  If the statements_digest is enabled, this information gets written to the  events_statements_summary_by_digest table.
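A typical digest query, assuming the statements_digest consumer is enabled, looks something like this:

```sql
-- The most frequently executed normalized statements
SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
  FROM performance_schema.events_statements_summary_by_digest
 ORDER BY COUNT_STAR DESC
 LIMIT 5;
```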

How to modify: Update this table, or use performance_schema_consumer_name=value at server startup.
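For example, if you do not need the wait-event history, those consumers can be switched off in place:

```sql
-- Disable the wait-event history consumers
UPDATE performance_schema.setup_consumers
   SET ENABLED = 'NO'
 WHERE NAME IN ('events_waits_history', 'events_waits_history_long');
```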

At the MySQL Virtual Developers Day, Mark Leith gave a great talk explaining the MySQL performance schema. This talk is now available online:

https://oracle.6connex.com/portal/mysql/login/

Mark also has an excellent blog:

http://www.markleith.co.uk/2012/07/13/monitoring-processes-with-performance-schema-in-mysql-5-6/

Marc Alff’s blog post on configuring the Performance Schema: http://marcalff.blogspot.com/2013/04/on-configuring-performance-schema.html?m=1

We have excellent documentation available on the performance schema:

http://dev.mysql.com/doc/refman/5.5/en/performance-schema.html

I hope you enjoy exploring the performance_schema.

MySQL and Oracle: Playing in the Same Sandbox

Did you know that approximately 70% of Oracle customers are also using MySQL? The use cases for each database are often different, and sometimes overlap. But the needs of a database are the same; among them are security, integration with other products, and strong monitoring. One of the advantages of the MySQL/Oracle relationship is that Oracle is integrating MySQL with many of its other software products, such as:

  • GoldenGate, for real-time heterogeneous replication from/to MySQL and other databases. GoldenGate real-life use cases include real-time replication of selected MySQL data collected online to a data warehouse in Oracle, Teradata, Netezza, etc.; query offloading from a transactional system built on Oracle, DB2 z Series, SQL Server, etc. to a MySQL query instance; and real-time reporting by replicating a subset of data from corporate applications into a dedicated MySQL data mart. GoldenGate for MySQL is available now.
  • Database Firewall – Are you worried about SQL Injection?  Database firewall acts as your first line of defense by examining all incoming SQL transactions and using a SQL grammar based technology that can categorize millions of SQL statements into a small number of SQL characteristics.  Initially, you use the software to monitor incoming transactions.  It learns about normal activity in your system.  When you are ready to go into defensive mode, Database Firewall uses this SQL whitelist to create policies so you can block, log and notify on any abnormal SQL.  This is available with MySQL now, and is also available for other databases including Oracle, SQL Server, IBM DB2 and Sybase.
  • Audit Vault will consolidate the audit stream from audit data into a secure repository with built in reporting for auditors and security personnel.  This certification will be phased in with the release of the MySQL audit API and subsequent integration with Audit Vault.
  • Oracle Secure Backup – This product provides encrypted tape backups, vault rotation, and policy driven media management and is integrated with MySQL Enterprise Backup. Secure Backup for MySQL is available now.

These are just a few of the product integrations.  At the Southern California Linux Expo (SCaLE), I had the opportunity to present this topic on MySQL Friday.   You can find my slides at Slideshare:

Playing in the Same Sandbox:  MySQL and Oracle

The Oracle product certifications and integrations will allow you to use a common set of tools for Oracle and MySQL databases, and provide MySQL with additional security and cost-effective operation. You’ll find more information on each of these products here:
Database Firewall

Goldengate

Audit Vault

Secure Backup

Aloha – MySQL Dives into the Thread Pool

By now you have probably heard about the MySQL thread pool plugin and API, but you may not have fully processed the details. Here’s the quick summary:  With the new thread pool plugin, there is now an alternative way to handle connection threads in MySQL Enterprise Edition.  With the plugin, MySQL connection threads are shared like an extraordinarily well managed timeshare in Hawaii.  When one connection is “idle”, asking nothing of and expecting nothing from the database, another connection can use that same thread for its database requests.  Threads are released by each connection as soon as the request is completed and  go back into the pool for re-use – just like the theoretical timeshare is up for grabs on the weeks you are not there.

In the older, and still default, connection thread model, threads are dedicated to a single client for the life of the connection, and there are as many threads as there are clients currently connected to the database. This has some disadvantages when the server workload must scale to handle large numbers of connections, and the overhead can be significant. This occurs for several reasons:

  • Lots of threads use lots of memory and can make the CPU cache ineffective
  • Too many active threads trying to execute in parallel may cause a high level of resource contention and be inappropriate for the amount of parallelism available

The new thread pool plugin offers an alternative thread pool implementation that focuses on limiting the number of concurrent, short-running statements to maximize performance and reduce overhead. By limiting the number of concurrent, short-running statements and sharing threads, we can control the number of active threads at any one time. Thread management has been revamped, and by managing these threads in a highly efficient manner, we reduce overhead and maintain performance levels as the number of users increases.

Here are the mechanics:  In the new plugin, threads are organized into groups (16 by default but configurable up to 64 on server startup).  Each group starts with one thread and can increase to a maximum of 4096 threads.  Additional threads are created only when necessary.  Each incoming connection request is assigned to a group by round robin. Each group has one listener thread that listens for incoming statement requests.

When a statement request comes in, it is executed immediately by the group’s listener thread if that thread is not busy and there are no other statement requests waiting. If the statement request finishes quickly, the listener thread then returns to listening and is available to execute the next incoming request, avoiding the need to create a new thread. If the request does not finish quickly, it runs to completion, but another thread is created as the new listener.

If the listener thread is busy, the request is queued. There will be a very brief wait (configurable with the thread_pool_stall_limit system variable, which defaults to 60 ms) while we see whether the currently executing statement will finish quickly. If it finishes within thread_pool_stall_limit, we can re-use its thread for the next request in the queue, eliminating the overhead of creating a new thread or having too many short statements trying to execute in parallel.

You can see how this thread pool design strives to have one thread executing per group at any time. The number of groups (the thread_pool_size variable) is very important, because it approximates the number of short-running statements that will be executing concurrently at any one time. Long-running statements are prevented from making other statements wait: if they go beyond thread_pool_stall_limit, another thread is started and the next request in the queue executes on it.

Your predominant storage engine will help determine the number of groups you should have.  For InnoDB, between 16 and 36 groups seems to work well in many cases, but for MyISAM set it much lower (4-8).
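Putting these settings together, a minimal my.cnf sketch for an InnoDB-heavy workload might look like this (the values are illustrative, not recommendations):

```ini
[mysqld]
# Load the commercial thread pool plugin library
plugin-load=thread_pool.so
# Number of thread groups; 16-36 is a reasonable range for InnoDB
thread_pool_size=24
# Milliseconds before a low-priority statement is kicked up
# to the high-priority queue
thread_pool_prio_kickup_timer=1000
```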

There are two queues for waiting statements, low and high priority.  The low priority queue contains:

  • all statements for non-transactional storage engines
  • all statements if autocommit is enabled
  • the first statement in  an InnoDB transaction

These statements do not languish in the low priority queue forever; they are kicked over to the high priority queue when the thread_pool_prio_kickup_timer times them out. However, there is a maximum number of statements that can be moved per time period to keep things under control.

The high priority queue contains:

  • any subsequent statements in InnoDB transactions, and
  • any statements kicked up from the low priority queue.

You can find the thread pool plugin and other commercial extensions in MySQL 5.5.16 and above, available on http://support.oracle.com and the Oracle Software Delivery Cloud https://edelivery.oracle.com. This release contains a plugin library object file, which must be placed in the appropriate directory. The server must then be started with the --plugin-load option. Documentation and complete install directions for the plugin can be found at http://dev.mysql.com/doc/refman/5.5/en/thread-pool-plugin.html. There is also a thread pool API available in the Community Edition.

Happy swimming!


Monitoring Your MySQL Backup

In California we are always thinking about backups. Living near an earthquake fault line makes this necessary. For me, it is the Hayward Fault (it runs from goal post to goal post in the University of California, Berkeley stadium). We are strongly advised to have backup systems for water, food, and medical emergencies. It’s necessary to monitor your food and water emergency supplies so that if the worst happens, you don’t have spoiled food or water as your backup (who knew water expires?). Plan for the worst, hope for the best, but keep an eye on those supplies and replenish them when necessary! And most of all, make sure your good intentions end up as actual physical supplies in the garage!

Backups are also an incredibly critical part of the enterprise environment. It’s all about being able to successfully restore your database when needed. There are many ways to back up your database, but are you monitoring your MySQL backups to make sure they are going to be there when you need them? In this post, we’ll cover the new features in MySQL Enterprise Monitor (MEM) 2.3.5 that help monitor backups.

In the last post I covered some of the new features in MySQL Enterprise Backup (MEB) that allow you to write single file backups, stream these backups to remote servers or other devices, stream to media management software like Oracle Secure Backup, and take advantage of tape encryption capabilities (Steps 1-4 from the previous post). This is the final post in the series, and it describes the new Backup Advisor in MEM and the underlying MySQL tables (Step 5).

MySQL Enterprise Monitor (MEM) is the monitoring software supplied with the Enterprise Edition subscription of MySQL; if you are interested in trying it, you can download a trial at https://edelivery.oracle.com. I find MEM very helpful because it provides proactive monitoring for your MySQL databases. This increases your productivity because you automate monitoring and speed up diagnosis of potential issues.

MEB is a backup tool included with an enterprise subscription from Oracle/MySQL, also available for trial from http://edelivery.oracle.com.  MEB was previously known as InnoDB Hot Backup, and provides hot, non-blocking backups for InnoDB tables, and “warm” backups for MyISAM tables. 

Monitor Your Backup
MEM 2.3.5 (and above) has a new Backup Advisor that can be used to monitor backups (note: MEM 2.3.6 was released in early September). The Backup Advisor alerts you to backup success or failure, excessive backup lock time, backups that are too old, and failure to use incremental backups. Here are some screenshots. The first is from the Details and Advanced tabs for the “Backup Succeeded” rule in MEM:

This rule will let you know how long the backup took to complete and how long locks were held. 

Full backups that are older than a threshold number of days (the default is 7) are reported, since out-of-date backups can cause delays or problems if you ever need to restore from them:

You will also see whether there was any excessive lock time in your backup:


Backups are always a balancing act between performance, storage space, and restore time. Incremental backups back up only the data that has changed since the last backup, save on storage space, and are faster than a full backup. I encourage the use of incremental backups in your backup strategy. MEM will notify you if incremental backups are not enabled:

 

You can customize any of the thresholds to suit your environment.

Tables Behind the Curtain

MEM uses the backup progress information written into the mysql.backup_progress table, and status information from the mysql.backup_history table.  You can query these tables to get backup status information if you are not using MEM (but you will not receive the alerts and notifications that MEM provides).

Here I’ve queried from the backup_history table, which keeps a history of the backups I’ve completed:

The backup_progress table shows the state of the backup as it progresses from start to finish:
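If you want to check on recent backups without MEM, a query along these lines works (the column names follow the MEB documentation; adjust them to your version):

```sql
-- The five most recent backups and whether they succeeded
SELECT backup_id, backup_type, start_time, end_time, exit_state
  FROM mysql.backup_history
 ORDER BY start_time DESC
 LIMIT 5;
```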

MySQL Enterprise Backup and Enterprise Monitor new features bring us one step closer to a true enterprise backup environment. Streaming, integration with media management systems, and the ability to take advantage of tape encryption features coupled with the new Backup Advisor in MEM will help achieve that state we all need to plan for but hope to never see – the ability to quickly restore a database when needed.

5 Steps to an Enterprise Backup

I’d like to focus this blog on using MySQL in the enterprise, and kick off with a series of posts on “Enterprise Backup” building on the new features in both MySQL Enterprise Backup (MEB) and MySQL Enterprise Monitor (MEM). The new features in MEB 3.6 provide the capability to stream backups directly to another server, interface with backup media management software, and take advantage of tape encryption. MEM 2.3.5 now has a Backup Advisor that helps monitor your backups. In this and a subsequent blog post, I’ll go through a progression of backups, building up from a fairly straightforward vanilla single file backup with MEB, as follows:

  • Back up to a Single File
  • Add streaming to your Single File backup
  • Stream your Single File Backup to a Media Management System
  • Encrypt your Backup Tapes using your Media Management System
  • Monitor your Backup with MySQL Enterprise Monitor or queries

MEB is a backup tool included with an enterprise subscription from Oracle/MySQL. If you are interested in trying it out, it can be downloaded as a trial from http://edelivery.oracle.com. MEB was previously known as InnoDB Hot Backup, and provides hot, non-blocking backups for InnoDB tables and “warm” backups for MyISAM tables. MEM is also included in the enterprise subscription from Oracle/MySQL and can be downloaded for a trial at the same site. It provides proactive monitoring for your MySQL databases.

This post includes the MEB related single file backup, streaming the single file, interfacing with a media management system, and encrypting tapes.  My next post will cover the MEM Backup Advisor and the tables behind the scene.

The following abbreviations are used in this blog:

MEB MySQL Enterprise Backup

MEM MySQL Enterprise Monitor

MMS Media Management System (software for managing tapes and backups)

OSB Oracle Secure Backup

SBT Secure Backup to Tape Interface

Step 1: Backup to a Single File

You can now backup into a single file, which simplifies moving backup data around, especially if you have a large number of files to keep track of.  You do this by using the backup-to-image option of MySQL Enterprise Backup:

mysqlbackup
--backup-image=hr.mbi
--backup-dir=/backup-tmp
--user=lynn
--password
backup-to-image

This places my single file backup in the file hr.mbi. Since some small work files are still used, you need to indicate where these should go with the --backup-dir option. If you are interested in these files, see the “Files that Are Backed Up” section of the MySQL Enterprise Backup User’s Guide http://dev.mysql.com/doc/mysql-enterprise-backup/3.5/en/meb-files-overview.html. There are also copies of these files inside your successful single file backup, so you are not required to keep them. For example, backup_variables.txt contains the start and end log sequence numbers plus information indicating whether this was a compressed, partial, or incremental backup. The file backup_create.xml lists the command line arguments and the environment that the backup was created in.

You can also convert an existing backup directory to a single file backup as follows:

mysqlbackup
--backup-image=/backup/my.mbi
--backup-dir=/var/mysql/backup
--user=lynn
--password
backup-dir-to-image

Use list-image to list the contents of a single file backup (leaving out user and password for simplicity):

mysqlbackup
--backup-image=/backup/my.mbi
list-image

You will see a list of the contents of the single file backup, with each database and table listed. Here’s an example of the contents of a single file backup named Sept2011.mbi:

______________________________________________

mysqlbackup --backup-image=Sept2011.mbi list-image

IMPORTANT: Please check that mysqlbackup run completes successfully.
At the end of a successful 'list-image' run mysqlbackup
prints "mysqlbackup completed OK!".

mysqlbackup: INFO: Backup Image MEB version string: 3.6.0 [01.07.2011 ]
[File]: [Size:             197]: backup-my.cnf
[File]: [Size:           5578]:  meta\backup_create.xml
[File]: [Size:    16777216]: datadir\ibdata1
[File]: [Size:      2097152]: datadir\ibdata1.$_append_$.1
[Dir]: datadir\crazybase
[File]: [Size:                65]: datadir\crazybase\db.opt
[Dir]: datadir\crazybase3
[File]: [Size:                65]: datadir\crazybase3\db.opt
[Dir]: datadir\mysql
[File]: [Size:                35]: datadir\mysql\backup_history.CSM
[File]: [Size:            5557]: datadir\mysql\backup_history.CSV
[File]: [Size:          71260]: datadir\mysql\backup_history.frm
[File]: [Size:                35]: datadir\mysql\backup_progress.CSM
[File]: [Size:            5423]: datadir\mysql\backup_progress.CSV
[File]: [Size:           33370]: datadir\mysql\backup_progress.frm
[File]: [Size:                 0]: datadir\mysql\columns_priv.MYD
[File]: [Size:             4096]: datadir\mysql\columns_priv.MYI
[File]: [Size:            8820]: datadir\mysql\columns_priv.frm
[File]: [Size:            1320]: datadir\mysql\db.MYD
[File]: [Size:            5120]: datadir\mysql\db.MYI
[File]: [Size:            9582]: datadir\mysql\db.frm
[File]: [Size:                 0]: datadir\mysql\event.MYD
[File]: [Size:            2048]: datadir\mysql\event.MYI
[File]: [Size:          10223]: datadir\mysql\event.frm
[File]: [Size:                 0]: datadir\mysql\func.MYD
[File]: [Size:            1024]: datadir\mysql\func.MYI
[File]: [Size:            8665]: datadir\mysql\func.frm
[File]: [Size:                35]: datadir\mysql\general_log.CSM
[File]: [Size:                 0]: datadir\mysql\general_log.CSV
[File]: [Size:             8776]: datadir\mysql\general_log.frm
[File]: [Size:           22078]: datadir\mysql\help_category.MYD
[File]: [Size:             3072]: datadir\mysql\help_category.MYI
[File]: [Size:             8700]: datadir\mysql\help_category.frm
[File]: [Size:            89241]: datadir\mysql\help_keyword.MYD
[File]: [Size:            16384]: datadir\mysql\help_keyword.MYI
[File]: [Size:              8612]: datadir\mysql\help_keyword.frm
[File]: [Size:              8928]: datadir\mysql\help_relation.MYD
[File]: [Size:            18432]: datadir\mysql\help_relation.MYI
[File]: [Size:              8630]: datadir\mysql\help_relation.frm
[File]: [Size:          418976]: datadir\mysql\help_topic.MYD
[File]: [Size:            20480]: datadir\mysql\help_topic.MYI
[File]: [Size:              8770]: datadir\mysql\help_topic.frm
[File]: [Size:                 0]: datadir\mysql\host.MYD
[File]: [Size:              2048]: datadir\mysql\host.MYI
[File]: [Size:              9510]: datadir\mysql\host.frm
[File]: [Size:                84]: datadir\mysql\inventory.MYD
[File]: [Size:              2048]: datadir\mysql\inventory.MYI
[File]: [Size:              8592]: datadir\mysql\inventory.frm
[File]: [Size:                 0]: datadir\mysql\ndb_binlog_index.MYD
[File]: [Size:              1024]: datadir\mysql\ndb_binlog_index.MYI
[File]: [Size:              8778]: datadir\mysql\ndb_binlog_index.frm
[File]: [Size:                 0]: datadir\mysql\plugin.MYD
[File]: [Size:              1024]: datadir\mysql\plugin.MYI
[File]: [Size:              8586]: datadir\mysql\plugin.frm
[File]: [Size:                 0]: datadir\mysql\proc.MYD
[File]: [Size:              2048]: datadir\mysql\proc.MYI
[File]: [Size:              9996]: datadir\mysql\proc.frm
[File]: [Size:                 0]: datadir\mysql\procs_priv.MYD
[File]: [Size:              4096]: datadir\mysql\procs_priv.MYI
[File]: [Size:              8875]: datadir\mysql\procs_priv.frm
[File]: [Size:               693]: datadir\mysql\proxies_priv.MYD
[File]: [Size:              5120]: datadir\mysql\proxies_priv.MYI
[File]: [Size:              8800]: datadir\mysql\proxies_priv.frm
[File]: [Size:                 0]: datadir\mysql\servers.MYD
[File]: [Size:              1024]: datadir\mysql\servers.MYI
[File]: [Size:              8838]: datadir\mysql\servers.frm
[File]: [Size:                35]: datadir\mysql\slow_log.CSM
[File]: [Size:                 0]: datadir\mysql\slow_log.CSV
[File]: [Size:              8976]: datadir\mysql\slow_log.frm
[File]: [Size:                  0]: datadir\mysql\tables_priv.MYD
[File]: [Size:              4096]: datadir\mysql\tables_priv.MYI
[File]: [Size:              8955]: datadir\mysql\tables_priv.frm
[File]: [Size:                 0]: datadir\mysql\time_zone.MYD
[File]: [Size:              8192]: datadir\mysql\time_zone.MYI
[File]: [Size:              8636]: datadir\mysql\time_zone.frm
[File]: [Size:               312]: datadir\mysql\time_zone_leap_second.MYD
[File]: [Size:              2048]: datadir\mysql\time_zone_leap_second.MYI
[File]: [Size:              8624]: datadir\mysql\time_zone_leap_second.frm
[File]: [Size:          111896]: datadir\mysql\time_zone_name.MYD
[File]: [Size:             12288]: datadir\mysql\time_zone_name.MYI
[File]: [Size:              8606]: datadir\mysql\time_zone_name.frm
[File]: [Size:          658733]: datadir\mysql\time_zone_transition.MYD
[File]: [Size:          733184]: datadir\mysql\time_zone_transition.MYI
[File]: [Size:              8686]: datadir\mysql\time_zone_transition.frm
[File]: [Size:            99788]: datadir\mysql\time_zone_transition_type.MYD
[File]: [Size:            38912]: datadir\mysql\time_zone_transition_type.MYI
[File]: [Size:              8748]: datadir\mysql\time_zone_transition_type.frm
[File]: [Size:               376]: datadir\mysql\user.MYD
[File]: [Size:              2048]: datadir\mysql\user.MYI
[File]: [Size:            10630]: datadir\mysql\user.frm
[Dir]: datadir\performance_schema
[File]: [Size:              8624]: datadir\performance_schema\cond_instances.frm
[File]: [Size:                61]: datadir\performance_schema\db.opt
[File]: [Size:              9220]: datadir\performance_schema\events_waits_current.frm
[File]: [Size:              9220]: datadir\performance_schema\events_waits_history.frm
[File]: [Size:              9220]: datadir\performance_schema\events_waits_history_long.frm
[File]: [Size:              8878]: datadir\performance_schema\events_waits_summary_by_instance.frm
[File]: [Size:              8854]: datadir\performance_schema\events_waits_summary_by_thread_by_event_name.frm
[File]: [Size:              8814]: datadir\performance_schema\events_waits_summary_global_by_event_name.frm
[File]: [Size:              8654]: datadir\performance_schema\file_instances.frm
[File]: [Size:              8800]: datadir\performance_schema\file_summary_by_event_name.frm
[File]: [Size:              8840]: datadir\performance_schema\file_summary_by_instance.frm
[File]: [Size:              8684]: datadir\performance_schema\mutex_instances.frm
[File]: [Size:             8776]: datadir\performance_schema\performance_timers.frm
[File]: [Size:             8758]: datadir\performance_schema\rwlock_instances.frm
[File]: [Size:             8605]: datadir\performance_schema\setup_consumers.frm
[File]: [Size:             8637]: datadir\performance_schema\setup_instruments.frm
[File]: [Size:             8650]: datadir\performance_schema\setup_timers.frm
[File]: [Size:             8650]: datadir\performance_schema\threads.frm
[Dir]: datadir\pets
[File]: [Size:               65]: datadir\pets\db.opt
[Dir]: datadir\test
[File]: [Size:             8560]: datadir\test\names.frm
[Dir]: datadir\world
[File]: [Size:             8652]: datadir\world\bartstations.frm
[File]: [Size:             8710]: datadir\world\city.frm
[File]: [Size:             8630]: datadir\world\citychild.frm
[File]: [Size:             8646]: datadir\world\citylist2.frm
[File]: [Size:             9172]: datadir\world\country.frm
[File]: [Size:             8702]: datadir\world\countrylanguage.frm
[File]: [Size:             8590]: datadir\world\countrylist2.frm
[File]: [Size:             8590]: datadir\world\countryparent.frm
[File]: [Size:               65]: datadir\world\db.opt
[File]: [Size:              741]: datadir\world\europe_view.frm
[File]: [Size:             3584]: datadir\ibbackup_logfile
[File]: [Size:              176]: meta\backup_variables.txt
[File]: [Size:           38562]: meta\backup_content.xml
[File]: [Size:           15236]: meta\image_files.xml
mysqlbackup: INFO:  Backup image contents listed successfully.
Source Image Path= C:\temp\temp\Sep42011.mbi
mysqlbackup completed OK!

______________________________________________

Step 2:  Add Streaming to your Single File Backup
Streaming allows you to write the backup directly to a different server without ever storing it locally.  This limits the storage space you need on the database server itself, and can be faster than writing the backup locally and then copying it to another server.  You build on the single file option by combining it with OS features such as pipes and ssh/scp, writing the backup image to standard output.

mysqlbackup --backup-image=- backup-to-image | ssh user@host 'command arg1 arg2 ...'

Here the special value - tells mysqlbackup to write the backup image to standard output, and command is the archiving command (such as dd or tar), with its arguments, that you would normally use for archiving on the remote host.
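For instance, the pattern above can be made concrete with dd as the remote archiving command.  This is only a sketch; the user name, host, and target path are placeholders:

mysqlbackup --user=backupadmin --password \
  --backup-dir=/tmp/backup-tmp \
  --backup-image=- backup-to-image \
  | ssh backupuser@remotehost 'dd of=/backups/hr-2011-09-06.mbi'

A local --backup-dir is still needed for mysqlbackup's temporary files, but the backup image itself never touches the local disk.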

Step 3:  Stream your Single File Backup to a Media Management System

You can back up to tape with media management software (MMS) such as Oracle Secure Backup (OSB), Legato, or NetBackup. The MMS must support version 2 or higher of the System Backup to Tape (SBT) interface.  To see a list of Oracle partners who use the SBT interface, go to
http://www.oracle.com/technetwork/database/features/availability/bsp-088814.html
This interface was originally developed by Oracle as a standard way for third-party backup media providers to integrate their solutions with RMAN, the Oracle Database Recovery Manager and backup tool.  MySQL Enterprise Backup (MEB) 3.6 now supports this interface, so if you are already using a media management solution like Oracle Secure Backup (OSB) or NetBackup, you can stream directly from MEB to the MMS.

Some MySQL sites have hundreds of servers backed up to many physical or virtual tape devices and produce thousands of backup tapes on a regular basis.  These tapes may be maintained at different locations for varying periods of time.  An MMS gives you better control over the process and may add capabilities like backup policies, tape vaulting control, and tape encryption.

To interface with a media management system like OSB, you once again build on the single file backup command, but use a special prefix, sbt:, on your image filename.  This prefix sends the backup to the MMS instead of to a local file:

--backup-image=sbt:name
Your mysqlbackup command would look something like this:

mysqlbackup
--port=3306
--protocol=tcp
--user=lynn
--password
--backup-image=sbt:backup-hr-2011-09-06
--backup-dir=/backup
backup-to-image

mysqlbackup uses the normal operating system paths and environment variables to locate the System Backup to Tape (SBT) library it needs to accomplish this.  What if you have more than one MMS?  No problem: just specify the --sbt-lib-path parameter of the mysqlbackup command with the correct path and filename of the SBT library for your MMS.
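For example, if your SBT library is not in a default location, you might point mysqlbackup at it explicitly.  The library path below is a placeholder; use whatever path your MMS vendor actually installs:

mysqlbackup --user=lynn --password \
  --backup-dir=/backup \
  --backup-image=sbt:backup-hr-2011-09-06 \
  --sbt-lib-path=/opt/mms/lib/libobk.so \
  backup-to-image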

Step 4: Encrypt your Backup Tapes using your Media Management System

Encryption adds an extra layer of security to tapes in case they are misplaced or stolen.  MEB does not itself perform encryption, but you can use OSB or another encryption-enabled MMS to encrypt MySQL tapes.  In OSB you control encryption by defining a storage selector that applies specific features (like encryption) to a particular backup. You define the storage selector only once; OSB then automatically selects the appropriate database backup storage selector for each backup job.
In OSB a storage selector contains the database name, the hostname, and the name of the media to use for backups.  Optionally, you can indicate whether encryption should be used, the type of backup (full or incremental), and restrictions on tape devices.  When OSB receives a backup command through MEB, it examines the defined database backup storage selectors to determine whether one matches the attributes of the incoming backup job, using the database name and backup type (full, incremental, and so on) to select the most appropriate storage selector.
OSB storage selectors are created either through Oracle Enterprise Manager or the command line interface to OSB.  In Oracle Enterprise Manager, storage selectors are defined from within the Maintenance tab under Backup Settings.
If you are not using Enterprise Manager, use the OSB command line interface (obtool) to define your storage selector.  Here is an example of a command that creates a storage selector called MySQLworld, with encryption, for the database worlddb:

mkssel  (stands for make storage selector)
--dbname worlddb
--host myserver2
--content full
--encryption on
MySQLworld

To restore MySQL data from tape, you use the --sbt-backup-name parameter as part of the restore operation.
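Building on the backup command from Step 3, a restore might look something like the following sketch, which extracts the tape image into a local directory using the image-to-backup-dir command.  The directory is a placeholder, and your MMS may also require the same --sbt-* options you used during backup:

mysqlbackup --user=lynn --password \
  --backup-image=sbt:backup-hr-2011-09-06 \
  --backup-dir=/restore \
  image-to-backup-dir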

A few notes about using MEB with OSB: you must preauthorize user access for MySQL for the backup to work with OSB.  For instructions, see section 4.2.2, "Creating a Preauthorized Oracle Secure Backup User," in the OSB Administrator's Guide: http://download.oracle.com/docs/cd/E14812_01/doc/doc.103/e12834/osb_rman_backup.htm#BDCCCIIA

In this entry I’ve covered how to use some of the new features in MySQL Enterprise Backup to stream backups and interface with media management systems.  Next week I will cover the new Backup Advisor in MySQL Enterprise Monitor.
