MySQL Workbench 6.0: Home Screen

The home screen of MySQL Workbench is the first thing you see when you start up the application, and it’s therefore an important hub for quickly reaching important parts of the application or performing recurring tasks like opening certain connections. This article describes the home screen in some detail with additional info and tips.

Continue reading “MySQL Workbench 6.0: Home Screen”

MySQL Workbench Tunneling to Socket File only servers

Brandon Johnson at Mozilla has posted a nice tip on how to use MySQL Workbench SSH tunneling with MySQL servers configured to accept only Unix socket file connections (no TCP/IP). Head over there for more info about how to use the “socat” utility to bridge an SSH tunnel from WB to a socket-only MySQL server.
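In essence, socat runs on the database host, listening on a local TCP port and forwarding everything to the server’s socket file, so the SSH tunnel has a TCP endpoint to point at. The port and socket path below are illustrative assumptions, not values from the linked post:

$> socat TCP-LISTEN:3307,bind=127.0.0.1,fork,reuseaddr UNIX-CONNECT:/var/run/mysqld/mysqld.sock

With something like that running on the remote machine, the SSH tunnel in Workbench can simply target 127.0.0.1:3307 on that host.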

Installing a driver for Microsoft SQL Server and Sybase ASE in Linux and Mac

In a recent post we showed you how to migrate a SQL Server database to MySQL. There, we used the official Microsoft ODBC driver and that’s OK if you are running MySQL Workbench in Windows. But what if your desktop OS is some Linux variant or Mac OS X?

It turns out that Microsoft has recently released an ODBC driver for Linux. However, you can’t use this driver with MySQL Workbench for Linux (actually you can, but you would have to rebuild Workbench). The main reason is that this ODBC driver was linked against unixODBC (an ODBC driver manager), while Workbench uses another ODBC driver manager, iODBC, and the two of them can’t coexist in the same system.

So for Linux and Mac we prefer to go in a different direction and use FreeTDS, an alternate ODBC driver for Microsoft SQL Server. As a bonus, the very same driver can be used for connecting to Sybase Adaptive Server Enterprise, an RDBMS also supported in the MySQL Workbench Migration Wizard.

Although you can find FreeTDS in the software repositories of several modern Linux distributions, you are encouraged to download it and compile it yourself. Most distros provide fairly old FreeTDS versions and virtually all of them come linked against unixODBC, which is not supported by Workbench. Fortunately, the process for compiling and installing it is pretty straightforward. Here are the steps you should follow:

  1. Go to the FreeTDS homepage and download a recent version of the driver’s source code. Make sure that you have at least version 0.92. We advise you to use the latest stable version rather than a nightly snapshot. Save this tarball in a directory where you have write permissions (something like /home/<your_user>/freetds would do).
  2. If you are in Linux, make sure that you have installed the essential build tools (C/C++ compiler, linker, etc.) and the development files for iODBC. Try running this from the command line if you are using a Debian-based distro like Ubuntu:
    $> sudo apt-get install build-essential libiodbc2-dev

    If your distro uses rpm packages (Red Hat, Fedora, etc.) the command would be like this:

    $> su -
    $> yum install make automake gcc gcc-c++ libiodbc-devel

    Mac users can skip this step since the appropriate development files are already included in their system.

  3. Since there are a couple of options you have to pass to the driver’s configure script, we have created a script file to help you pass them painlessly. You can find this script file in /usr/share/mysql-workbench/extras/build_freetds.sh if you are in Linux or in MySQLWorkbench.app/Contents/SharedSupport/build_freetds.sh if you use a Mac. Locate this helper script and copy it to the same directory where you put the driver’s tarball.
  4. Now it’s time to actually build the driver. If you are not already there, cd to the directory where you put the helper script and run it from there:
    $> ./build_freetds.sh

    Wait until the compilation finishes.

  5. Now that the driver is built, install it by running make install as root. And there you go: you should have the FreeTDS driver installed inside the /usr/lib or the /usr/local/lib directory, named libtdsodbc.so (see the example after this list).
  6. Proceed to register the installed FreeTDS driver as explained here.
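To illustrate step 5, the install typically boils down to the commands below; the exact location of libtdsodbc.so depends on the install prefix the helper script configured, so treat the paths as assumptions:

$> sudo make install
$> ls /usr/lib/libtdsodbc.so* /usr/local/lib/libtdsodbc.so*

One of the two ls patterns should list the freshly installed driver library.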

Now you should be ready to use the driver to migrate Microsoft SQL Server and Sybase ASE databases to MySQL using the MySQL Workbench Migration Wizard.

Sergio de la Cruz

How-To: Migrate PostgreSQL databases to MySQL using the MySQL Workbench Migration Wizard

MySQL Workbench 5.2.41 introduced the new Migration Wizard module. This module allows you to easily and quickly migrate databases from various RDBMS products to MySQL. As of Workbench 5.2.44 you can migrate databases from Microsoft SQL Server, PostgreSQL and Sybase Adaptive Server Enterprise. It also provides for generic migrations, i.e. migrations from other RDBMSes that are not explicitly supported, provided that they have a well-behaved ODBC driver. More on this in an upcoming post…

Additionally, you can use the Migration Wizard to perform MySQL to MySQL database migrations, which can be used for tasks such as copying a database across servers or migrating data across different versions of MySQL.

We have already described in a previous post how to use the Migration Wizard to migrate a Microsoft SQL Server database to MySQL. In this post we are going to migrate a PostgreSQL database to MySQL using the Migration Wizard.

So let’s get our hands dirty and run through the Migration Wizard in order to migrate a PostgreSQL database to MySQL. In the rest of this post I assume that you have:

  • A running PostgreSQL instance in which you have proper access to the database you want to migrate. (I’ll call this database the source database from now on.) I have a PostgreSQL instance running on a computer (an Ubuntu 12.04 box) in my local network, with the Pagila sample database from pgFoundry installed on top of it. I’m using the standard postgres user, which has full privileges. You can use whatever PostgreSQL version you have at hand, but keep in mind that the Migration Wizard officially supports PostgreSQL 8.0 and newer, so older PostgreSQL versions might not work.
  • A running MySQL Server instance with proper user access. The Migration Wizard supports MySQL versions from 5.0 onwards so make sure you have a supported version. For this tutorial I’m using MySQL Server 5.5.27 CE installed in the very same Ubuntu PC where PostgreSQL is running.
  • MySQL Workbench 5.2.44 or newer. This time I’m going to use Workbench for Linux. I have Workbench running in another computer with Ubuntu 12.10 installed.

Let’s start now…

Download, compile (Linux and Mac only) and install a PostgreSQL ODBC driver

You need to install an ODBC driver for PostgreSQL in the machine where you installed MySQL Workbench. Please follow the instructions here.

Open MySQL Workbench and start the Migration Wizard

From the main MySQL Workbench screen you can start the Migration Wizard by clicking on the Database Migration launcher in the Workbench Central panel or through Database > Migrate in the main menu.

wb_initial_screen

A new tab showing the Overview page of the Migration Wizard should appear.

Overview Page of the MySQL Workbench Migration Wizard

Read the Prerequisites section carefully. You can read there that you need an ODBC driver for your source RDBMS installed. If you have installed the psqlODBC driver as explained in the previous section, you are good to go.

Set up the parameters to connect to your source database

Click on the Start Migration button in the Overview page to advance to the Source Selection page. In this page you need to provide the information about the RDBMS you are migrating, the ODBC driver to use and the parameters for the connection. The name of the ODBC driver is the one you set up when you registered your psqlODBC driver with the driver manager (psqlODBC, for example).

If you open the Database System combo box you’ll find a list of the supported RDBMSes. Select PostgreSQL from the list. Just below it there’s another combo box named Stored Connection. It will list saved connection settings for that RDBMS. You can save connections by marking the checkbox at the bottom of the page and giving them a name of your preference.

The next combo box is for the selection of the Connection Method. This time we are going to select ODBC (manually entered parameters) from the list since we are going to manually type the parameters for our PostgreSQL connection. Other alternatives are ODBC data sources and ODBC connection strings.

Now it’s time to enter the parameters for your connection. In the Driver text field, type the ODBC driver name from the previous step (psqlODBC). Fill the remaining parameters (Hostname, Port, Username, Password and Database) with the appropriate values. The psqlODBC driver does not allow you to connect without specifying a database name, so make sure you put the name of your database before attempting to connect. At this point you should have something like this:

Source Selection Page of the MySQL Workbench Migration Wizard

Click on the Test Connection button to check the connection to your PostgreSQL instance. If you put the right parameters you should see a message reporting a successful connection attempt.

Set up the parameters to connect to your target database

Click on the Next button to move to the Target Selection page. Once there set the parameters to connect to your MySQL Server instance. When you are done click on the Test Connection button and verify that you can successfully connect to it.

Target Selection Page of the MySQL Workbench Migration Wizard

Select the schema(ta) to migrate

Click on the Next button to move to the next page. The Migration Wizard will communicate with your PostgreSQL instance to fetch a list of the schemata in your source database.

Fetching schema list in the MySQL Workbench Migration Wizard

Verify that all tasks have successfully finished and click on the Next button to move forward. You will be given a list of schemata to select the ones to migrate. The Schema Selection page will look like this:

Schema Selection Page of the MySQL Workbench Migration Wizard

Select the Pagila sample database from the list and its default schema public. Now look at the options below. A PostgreSQL database is comprised of one catalog and one or more schemata. MySQL only supports one schema in each database (to be more precise, a MySQL database is a schema) so we have to tell the Migration Wizard how to handle the migration of schemata in our source database. We can either keep all of the schemata as they are (the Migration Wizard will create one database per schema), or merge them into a single MySQL database. The last two options are for specifying how the merge should be done: either remove the schema names (the Migration Wizard will handle the possible name collisions that may appear along the way) or add the schema name to the database object names as a prefix. Let’s select the second option since we only have one schema and we are not particularly interested in keeping its meaningless public name.

Select the objects to migrate

Move to the next page using the Next button. You should see the reverse engineering of the selected schema in progress. At this point the Migration Wizard is retrieving relevant information about the involved database objects (table names, table columns, primary and foreign keys, indices, triggers, views, etc.). You will be presented with a page showing the progress as shown in the image below.

Reverse Engineering the Source Schemata in the MySQL Workbench Migration Wizard

It may take some time, depending on how fast your connection to the server is, the load on your PostgreSQL server and the load on your local machine. Wait for it to finish and verify that everything went well. Then move to the next page. In the Source Objects page you will have a list with the objects that were retrieved and are available for migration. It will look like this:

Source Objects Page of the MySQL Workbench Migration Wizard

As you can see, the Migration Wizard discovered table objects in our source database. If you click on the Show Selection button you will be given the opportunity to select exactly which of them you want to migrate, as shown here:

Source Objects Page of the MySQL Workbench Migration Wizard (expanded)

The items in the list to the right are the ones to be migrated. Note how you can use the filter box to easily filter the list (wildcards are allowed there too). By using the arrow buttons you can filter out the objects that you don’t want to migrate. At the end, don’t forget to clear the filter text box to check the full list of the selected objects. We are going to migrate all of the table objects, so make sure that all of them are in the Objects to Migrate list and that the Migrate Table Objects checkbox is checked. Most of the time you’ll want to migrate all objects in the schema anyway, so you can just click Next.

Review the proposed migration

Move to the next page. You will see the progress of the migration there. At this point the Migration Wizard is converting the objects you selected into their equivalent objects in MySQL and creating the MySQL code needed to create them in the target server. Let it finish and move to the next page. You might have to wait a little bit before the Manual Editing page is ready but you’ll end up with something like this:

Manual Editing Page of the MySQL Workbench Migration Wizard

As you can see in the image above there is a combo box named View. By using it you can change the way the migrated database objects are shown. Also take a look at the Show Code and Messages button. If you click on it you can see (and edit!) the generated MySQL code that corresponds to the selected object. Furthermore, you can double click in a row in the object tree and edit the name of the target object. Suppose you want your resultant database to have another name. No problem: double click on the Pagila row and rename it.

Manual Editing Page of the MySQL Workbench Migration Wizard (All Objects)

An interesting option in the View combo box is the Column Mappings one. It will show you all of the table columns and will let you individually review and fix the mapping of column types, default values and other attributes.

Manual Editing Page of the MySQL Workbench Migration Wizard (Column Mappings)

Run the resultant MySQL code to create the database objects

Move to the Target Creation Options page. It will look like this:

Target Creation Page of the MySQL Workbench Migration Wizard

As you can see there, you are given the options of running the generated code in the target RDBMS (your MySQL instance from the second step) or just dumping it into a SQL script file. Leave it as shown in the image and move to the next page. The migrated SQL code will be executed in the target MySQL server. You can view its progress in the Create Schemata page:

Create Schemata Page of the MySQL Workbench Migration Wizard

Once the creation of the schemata and their objects finishes you can move to the Create Target Results page. It will present you with a list of the created objects and whether there were errors while creating them. Review it and make sure that everything went OK. It should look like this:

Create Target Results Page of the MySQL Workbench Migration Wizard

You can still edit the migration code using the code box to the right and save your changes by clicking on the Apply button. Keep in mind that you would still need to recreate the objects with the modified code in order to actually perform the changes. This is done by clicking on the Recreate Objects button. You may need to edit the generated code if its execution failed. You can then manually fix the SQL code and re-execute everything. In this tutorial we are not changing anything, so leave the code as it is and move to the Data Transfer Setup page by clicking on the Next button.

Transfer the data to the MySQL database

The next steps in the Migration Wizard are for the transference of data from the source PostgreSQL database into your newly created MySQL database. The Data Transfer Setup page allows you to configure this process.

Data Transfer Setup Page of the MySQL Workbench Migration Wizard

There are two sets of options here. The first one allows you to perform a live transference and/or to dump the data into a batch file that you can run later. The other set of options gives you a way to tune up this process.

Leave the default values for the options in this page as shown in the above image and move to the actual data transference by jumping to the next page. It will take a little while to copy the data. At this point the corresponding progress page will look familiar:

Creating Target Schemata in the MySQL Workbench Migration Wizard

Once it finishes, move to the next page. You will be presented with a report page summarizing the whole process:

Report Page of the MySQL Workbench Migration Wizard

And that should be it. Click on the Finish button to close the Migration Wizard.

A little verification step

Now that the Pagila database was successfully migrated, let’s see the results. Open an SQL Editor page associated with your MySQL Server instance and query the Pagila database. You can try something like SELECT * FROM pagila.actor. You should get something like this:

Querying the target database to verify the migration

Conclusions

By now you should have a pretty good idea of the capabilities of the Migration Wizard and should be ready to use it for your own migrations. The official documentation is also there for you and you can always ask questions in the comments of this post or in the official migration forum. Live long and prosper!

Sergio de la Cruz

How-To: Guide to Database Migration from Microsoft SQL Server using MySQL Workbench

MySQL Workbench 5.2.41 introduces a new Migration Wizard module. This module allows you to easily and quickly migrate databases from various RDBMS products to MySQL. In this initial version, migrations from Microsoft SQL Server are supported, but it should also be possible to migrate from most ODBC-capable RDBMSes as well, using its generic RDBMS support. Additionally, you can use it to perform MySQL to MySQL database copies, which can be used for tasks such as copying a database across servers or migrating data across different versions of MySQL.

So let’s get our hands dirty and run through the Migration Wizard in order to migrate a Microsoft SQL Server database to MySQL. In the rest of this post I assume that you have:

  • A running SQL Server instance in which you have proper access to the database you want to migrate. (I’ll call this database from now on the source database). I have a remote SQL Server 2000 instance available and the sample Northwind database on top of it. I’m using the standard “sa” user, which has full privileges. You can use whatever SQL Server version you have at hand. Keep in mind that the Migration Wizard officially supports SQL Server 2000 and newer so older SQL Server versions might not work.
  • A running MySQL Server instance with proper user access. The Migration Wizard supports MySQL versions from 5.0 onwards so make sure you have a supported version. For this tutorial I’m using MySQL Server 5.5.15 CE installed in the same PC where MySQL Workbench is running.
  • MySQL Workbench 5.2.41 or newer for Windows. The Migration Wizard is also available in the Linux and Mac versions of MySQL Workbench, but running it from Windows will save us from installing an ODBC driver to connect to our SQL Server instance. Other blog posts will follow on how to proceed in those cases.

Let’s start now…

 Open MySQL Workbench and start the Migration Wizard

From the main MySQL Workbench screen you can start the Migration Wizard by clicking on the Database Migration launcher in the Workbench Central panel or through Database –> Migrate in the main menu.

wb_initial_screen

A new tab showing the Overview page of the Migration Wizard should appear.

2wb_migration_wizard_overview

Read the Prerequisites section carefully. You can read there that you need an ODBC driver for your source RDBMS installed. Any recent version of Windows comes with some ODBC drivers installed. For Windows 2000 and earlier these can be installed with the Microsoft Data Access Components (MDAC).

You should check if you have an ODBC driver for SQL Server. Start the Windows ODBC Data Source Administrator from MySQL Workbench using the Plugins –> Start ODBC Administrator menu item or just open a Windows terminal and type odbcad32.exe. Once there, go to the Drivers tab. You should see something like this:

3windows_odbc_data_sources1

As you can see, I already have two SQL Server ODBC drivers installed. The first one listed here (named “SQL Server”) comes preinstalled with Windows (you should have it as well). This driver is frozen at the level of functionality provided by SQL Server 2000 and it should be enough for you if your database doesn’t make use of the new features and datatypes introduced after this SQL Server version. If you have a SQL Server instance in the same machine where you installed MySQL Workbench then you should also have the second driver listed in the image (named “SQL Server Native Client…”). This one comes with SQL Server and fully supports the companion SQL Server version. If you don’t have it listed, you can download and install the Microsoft SQL Server 2012 Native Client. This is compatible with SQL Server 2012 as well as with previous SQL Server versions.

Once you take your pick on the driver to use, write down somewhere its name as shown in the ODBC Data Source Administrator. You’ll need this name to connect to your SQL Server instance from the Migration Wizard. Let’s go back to the Migration Wizard (you can close the ODBC Data Source Administrator now) and start the migration process.

Set up the parameters to connect to your source database

Click on the Start Migration button in the Overview page to advance to the Source Selection page. In this page you need to provide the information about the RDBMS you are migrating, the ODBC driver to use and the parameters for the connection.

If you open the Database System combo box you’ll find a list of the supported RDBMSes. Select Microsoft SQL Server from the list. Just below it there’s another combo box named Stored Connection. It will list saved connection settings for that RDBMS. You can save connections by marking the checkbox at the bottom of the page and giving them a name of your preference.

The next combo box is for the selection of the Connection Method. This time we are going to select ODBC (native) from the list since we are using the native ODBC drivers provided by Microsoft. Other alternatives are ODBC data sources and ODBC FreeTDS (FreeTDS is a popular open source driver for Microsoft SQL Server and Sybase).

Now it’s the time for putting the parameters for your connection. In the Driver text field, type the ODBC driver name from the previous step.

In the Server field put the values that identify your machine and your SQL Server instance name. If you don’t recall these, go to SQL Server Management Studio and connect to your server. Then right click on the server icon in the Object Explorer and the name will be displayed in the new window that appears. If you have SQL Server Express Edition installed in your local machine and you haven’t changed the server name, then the default “localhost\SQLEXPRESS” should work for you. Another option is to put your server IP address instead of the host name. You can also specify a port by adding a comma after the server name/IP (e.g. “127.0.0.1,1433”). The instance name is optional and defaults to the default SQL Server instance in the source host.

Now put your credentials (user name and password) to connect to the server. If you know the name of the database you want to migrate, put it in the Database field. Otherwise leave it blank and you will be later given a list to select your database there. At this point you should have something like this:

4wb_migration_wizard_source_selection3

Click on the Test Connection button to check the connection to your SQL Server instance. If you put the right parameters you should see a message reporting a successful connection attempt.

Set up the parameters to connect to your target database

Click on the Next button to move to the Target Selection page. Once there set the parameters to connect to your MySQL Server instance. When you are done click on the Test Connection button and verify that you can successfully connect to it.

5wb_migration_wizard_target_selection

Select the schema(ta) to migrate

Click on the Next button to move to the next page. The Migration Wizard will communicate with your SQL Server instance to fetch a list of the catalogs and schemata. If you left the Database field blank in the Source Selection page it will retrieve all of the catalogs in the server. Otherwise it will just fetch the schemata corresponding to the catalog you explicitly typed.

6wb_migration_wizard_schemata_fetching

Verify that all tasks finished successfully and click on the Next button to move forward. You will be given a list of catalogs and their corresponding schemata to select the ones to migrate. Keep in mind that you can only migrate schemata from one catalog at a time. The Schema Selection page will look like this:

7wb_migration_wizard_schemata_selection1

Select the Northwind sample database from the list and its default schema dbo. Now look at the options below. A SQL Server database is comprised of one catalog and one or more schemata. MySQL only supports one schema in each database (to be more precise, a MySQL database is a schema) so we have to tell the Migration Wizard how to handle the migration of schemata in our source database. We can either keep all of the schemata as they are (the Migration Wizard will create one database per schema), or merge them into a single MySQL database. The last two options are for specifying how the merge should be done: either remove the schema names (the Migration Wizard will handle the possible name collisions that may appear along the way) or add the schema name to the database object names as a prefix. Let’s select the second option since we only have one schema and we are not particularly interested in keeping its meaningless dbo name.

Select the objects to migrate

Move to the next page using the Next button. You should see the reverse engineering of the selected schema in progress. At this point the Migration Wizard is retrieving relevant information about the involved database objects (table names, table columns, primary and foreign keys, indices, triggers, views, etc.). You will be presented with a page showing the progress as shown in the image below.

8wb_migration_wizard_reverse_engineering

It may take some time, depending on how fast your connection to the server is, the load on your SQL Server and the load on your local machine. Wait for it to finish and verify that everything went well. Then move to the next page. In the Source Objects page you will have a list with the objects that were retrieved and are available for migration. It will look like this:

9wb_migration_wizard_object_selection1

As you can see the Migration Wizard discovered table and view objects in our source database. Note that only the table objects are selected by default to be migrated. You can select the view objects too, but you would have to provide their corresponding MySQL equivalent code later (no automatic migration is available for them yet) so let’s leave them off for now. The same applies for stored procedures, functions and triggers.

If you click on the Show Selection button you will be given the opportunity to select exactly which of them you want to migrate as shown here:

10wb_migration_wizard_object_selection_expanded1

The items in the list to the right are the ones to be migrated. Note how you can use the filter box to easily filter the list (wildcards are allowed as you can see in the image above). By using the arrow buttons you can filter out the objects that you don’t want to migrate. At the end, don’t forget to clear the filter text box to check the full list of the selected objects. We are going to migrate all of the table objects, so make sure that all of them are in the Objects to Migrate list and that the Migrate Table Objects checkbox is checked. Most of the time you’ll want to migrate all objects in the schema anyway, so you can just click Next.

Review the proposed migration

Move to the next page. You will see the progress of the migration there. At this point the Migration Wizard is converting the objects you selected into their equivalent objects in MySQL and creating the MySQL code needed to create them in the target server. Let it finish and move to the next page. You might have to wait a little bit before the Manual Editing page is ready but you’ll end up with something like this:

11wb_migration_wizard_object_editing1

As you can see in the image above there is a combo box named View. By using it you can change the way the migrated database objects are shown. Also take a look at the Show Code and Messages button. If you click on it you can see (and edit!) the generated MySQL code that corresponds to the selected object. Furthermore, you can double click in a row in the object tree and edit the name of the target object. Suppose you want your resultant database to have another name. No problem: double click on the Northwind row and rename it.

12wb_migration_wizard_object_editing_expanded1

An interesting option in the View combo box is the Column Mappings one. It will show you all of the table columns and will let you individually review and fix the mapping of column types, default values and other attributes.

13wb_migration_wizard_object_editing_expanded2

Run the resultant MySQL code to create the database objects

Move to the Target Creation Options page. It will look like this:

14wb_migration_wizard_target_creation_options2

As you can see there, you are given the options of running the generated code in the target RDBMS (your MySQL instance from the second step) or just dumping it into a SQL script file. Leave it as shown in the image and move to the next page. The migrated SQL code will be executed in the target MySQL server. You can view its progress in the Create Schemata page:

15wb_migration_wizard_target__creation

Once the creation of the schemata and their objects finishes you can move to the Create Target Results page. It will present you with a list of the created objects and whether there were errors while creating them. Review it and make sure that everything went OK. It should look like this:

16wb_migration_wizard_create_target_results1

You can still edit the migration code using the code box to the right and save your changes by clicking on the Apply button. Keep in mind that you would still need to recreate the objects with the modified code in order to actually perform the changes. This is done by clicking on the Recreate Objects button. You may need to edit the generated code if its execution failed. You can then manually fix the SQL code and re-execute everything. In this tutorial we are not changing anything, so leave the code as it is and move to the Data Transfer Setup page by clicking on the Next button.

Transfer the data to the MySQL database

The next steps in the Migration Wizard are for the transference of data from the source SQL Server database into your newly created MySQL database. The Data Transfer Setup page allows you to configure this process.

17wb_migration_data_transfer_setup

There are two sets of options here. The first one allows you to perform a live transference and/or to dump the data into a batch file that you can run later. The other set of options gives you a way to tune up this process.

Leave the default values for the options in this page as shown in the above image and move to the actual data transference by jumping to the next page. It will take a little while to copy the data. At this point the corresponding progress page will look familiar:

18wb_migration_data_transfer_progress

Once it finishes, move to the next page. You will be presented with a report page summarizing the whole process:

19wb_migration_report

And that should be it. Click on the Finish button to close the Migration Wizard.

A little verification step

Now that the Northwind database was successfully migrated, let’s see the results. Open an SQL Editor page associated with your MySQL Server instance and query the Northwind database. You can try something like “SELECT * FROM Northwind.categories”. You should get something like this:

20wb_northwind_sqlide

And why not create an EER diagram from the migrated database? Click on Create EER Model From Existing Database in the main screen of MySQL Workbench and follow through the wizard steps. Once done go to Arrange –> Autolayout in the main menu to accommodate your EER model and you should get something like this:

21wb_northwind_eer.PNG

Conclusions

By now you should have a pretty good idea of the capabilities of the Migration Wizard and should be ready to use it for your own migrations. The official documentation is also there for you and you can always ask questions in the comments of this post or in the official migration forum. Live long and prosper!

Sergio de la Cruz

Building MySQL Workbench from sources on Ubuntu/Debian

To build MySQL Workbench you need to install the dependencies, fetch the source code, configure it and actually do the build. Here’s how to do it on an Ubuntu/Debian system.

 

1) Get source code

Open the browser and go to http://www.mysql.com/downloads/workbench/

When in that page, select “Source Code” from the select box. Usually the TAR file will do the job, so download the “Generic Linux” option and uncompress it:

$> tar xf mysql-workbench-community-<version>-src.tar.gz

 

2) Install dependencies necessary to build

We’ll need to remove iodbc, if it’s installed

$> sudo apt-get remove iodbc

 

Now, we can install the full set of dependencies needed to build Workbench

$> sudo apt-get install build-essential cmake cmake-data autoconf automake pkg-config libtool libzip-dev libxml2-dev libsigc++-2.0-dev libglade2-dev libgtkmm-2.4-dev libglu1-mesa-dev libgl1-mesa-glx mesa-common-dev libmysqlclient-dev libmysqlcppconn-dev uuid-dev libpixman-1-dev libpcre3-dev libgnome2-dev libgnome-keyring-dev libgtk2.0-dev libpango1.0-dev libcairo2-dev python-dev libboost-dev libctemplate-dev mysql-client python-pysqlite2 libsqlite3-dev libtinyxml-dev swig libvsqlitepp-dev libgdal-dev libproj0

 

3) Prepare to build

Go to the directory where you uncompressed the TAR file

$> cd mysql-workbench-community-<version>-src

Now, create the directory where the build files will reside and go there

$> mkdir wb-build

$> cd wb-build

 

4) Build and install

Build and install Workbench, and optionally set the install prefix (defaults to /usr/local)

$> cmake [-DCMAKE_INSTALL_PREFIX=/usr] ..
$> make
$> sudo make install

If you don’t want to install it in the system directories or don’t have permissions to do so, try installing into the current directory. Be aware that the Workbench launch script must be updated to point to the proper directories:

$> make install DESTDIR=.

 

Notes:

  • You will need swig 1.3 to build WB
  • Antlr 3.4 is needed, but the current release has many bugs. You will need it patched with https://github.com/antlr/antlr3/pull/43 if you don’t want to use the bundled version. Then you’ll need to

          export ANTLR_JAR_PATH=<path_to_antlr_complete_file>

  • Instead of iodbc you can use unixodbc; to do that, install unixodbc and add -DUSE_UNIXODBC=True to the cmake invocation (see the example after these notes)
  • Ubuntu Packages require unixodbc, so if you’re using it, there won’t be any problem. If you want to use iodbc, you’ll need to build it from source.
  • On a Core2 Quad 2.4 GHz with 4 GB of RAM it takes about 30-40 minutes to build Workbench. It also uses about 5 GB of hard drive space to build and install.
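For example, if you decide to go with unixodbc instead of iodbc, the configure step from section 4 would look something like this (a sketch; pick the install prefix that suits you):

$> cmake -DUSE_UNIXODBC=True [-DCMAKE_INSTALL_PREFIX=/usr] ..
$> make
$> sudo make install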

Workbench and MySQL server at non-standard location in Linux.

Recently I had to test MySQL Workbench against a fresh version of the server. Naturally, the default mysql-server was already installed from the repository of my Linux distro. So I installed the latest server version into my /opt/server directory. And then I had several points to resolve, for example: how to start a detached server process from the Workbench Administrator; how to detect if the server is running or not, given that there are many of them running; and how to stop the server.

Now I will show several workarounds to perform the tasks listed in the previous paragraph. Below are the commands I put into the Server Instance Editor; the corresponding text entries are labeled ‘Start MySQL’, ‘Stop MySQL’ and ‘Check MySQL Status’:

Starting the server – (nohup /opt/server/5.5.9/bin/mysqld_safe --defaults-file=/opt/server/5.5.9/my.cnf & disown %1)
Note: the command above must be used including the parentheses.
Stopping the server – kill `cat /opt/server/5.5.9/mysql.pid`
Note: the command contains backticks.
Checking MySQL Status – ps ax | wba_filter(/opt/server/5.5.9/my.cnf

Also I had to tweak /opt/server/5.5.9/my.cnf; here is the content of the file:
[mysqld]
port = 3308
basedir = /opt/server/5.5.8
datadir = /opt/server/5.5.8/data
tmpdir = /opt/server/5.5.8

socket = /opt/server/5.5.8/mysql.sock
pid-file = /opt/server/5.5.8/mysql.pid
log-error = /opt/server/5.5.8/mysql.error.log

That is all I had to change to make the server installed in a custom location work from WBA.

MySQL Workbench: Manage MySQL on Windows Servers the Windows way

The MySQL team has been continuously improving its products on the Windows platform. Along this line, we’ve responded to a request from our users of Workbench on Windows – to provide remote access to Windows Servers using Windows management methods – as an alternative  to SSH.

Managing a MySQL server obviously requires access to the target machine, which usually requires elevated rights for certain tasks like restarting the server or manipulating the configuration file on Windows (where this file is in a protected path). For local connections this is mostly not a big deal. However, for remote boxes security measures prevent easy manipulation of essential things like server processes. In this blog post we discuss native Windows management and how it can be used in MySQL Workbench.

Continue reading “MySQL Workbench: Manage MySQL on Windows Servers the Windows way”

MySQL Workbench: Introducing Utilities

MySQL has a well-earned reputation for ease of use and “15-minutes-to-success”, since we continually focus on making the server easy to use. MySQL Workbench provides the visual tools for database design, development, and administration. However, many DBAs prefer using the command line, and there are many tasks that require creating scripts to do the job.

To make it easier to work with the server, the latest release of MySQL Workbench—version 5.2.31—contains a set of Python scripts intended to make life easier for DBAs by providing easy-to-use utilities for common tasks, which were introduced in the blog post MySQL Workbench: Utilities. The set currently consists of just a few utilities, but will expand over time.

The utilities available in the Workbench are:

mysqldbcopy
Copy databases between servers.

mysqldbexport
Export databases to a file in different formats, including SQL, comma-separated files, and tab-separated files (and some more).

mysqldbimport
Import object definitions and/or data from a file—in different formats, similar to mysqldbexport—into a database.

mysqlindexcheck
Check for redundant or duplicate indexes on a list of tables or databases. It can also generate DROP statements to remove redundant indexes.

mysqlmetagrep
Search MySQL servers for objects containing fields matching a pattern.

mysqlprocgrep
Search MySQL servers for processes matching a pattern and perform actions.

mysqlreplicate
Set up replication between two servers.

mysqlserverclone
Start a new instance of a server to experiment with. This is used to test the utilities, but can be used whenever you need to set up a scratch server to test something.

mysqluserclone
Copy a MySQL user to one or more new users on another server.

Finding stuff on servers with ease

In order to search for various things on servers—for example, objects and processes—there are two commands that can be used: mysqlprocgrep and mysqlmetagrep. The name “grep” is borrowed from Unix, where the grep(1) command is used to search inside files, but in this case you can search among processes and inside metadata on a server.

Example: searching processes using mysqlprocgrep

With mysqlprocgrep you can find all processes that match certain conditions and either just print them out, or kill either the connection or the query. So, for instance, to see all connections that have been idle for more than 2 minutes, we can use:

$ mysqlprocgrep --server=root:password@localhost --match-command=sleep --age=+2m
+------------------------+-----+-------+------------+-------+----------+-------+--------+-------+
| Connection             | Id  | User  | Host       | Db    | Command  | Time  | State  | Info  |
+------------------------+-----+-------+------------+-------+----------+-------+--------+-------+
| root:*@localhost:3306  | 39  | mats  | localhost  | None  | Sleep    | 248   |        | None  |
+------------------------+-----+-------+------------+-------+----------+-------+--------+-------+

In the example above, I would especially like you to note how the server connection information is provided. To provide information on what server to connect to and how, we have introduced a new syntax to represent the connection information which we call a connection specification:

user:password@host:port:socket

In the connection specification, the password, port, and socket are optional and can be excluded, which will cause the defaults to be used instead.
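For instance, all of the following are valid connection specifications (the hosts and ports are just illustrative values):

root:password@localhost                (password given, default port and socket)
root@localhost:3307                    (no password, explicit port)
root:password@db.example.com:3306      (remote host with explicit port)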

This is used instead of having separate switches (for example, --host and --user) like the server client programs use. In addition to being easier to write, it also means that it is possible to provide multiple servers to a command (where it makes sense, of course). You might be interested in knowing that both mysqlprocgrep and mysqlmetagrep accept multiple servers.

If you now want to kill these idle connections, you can just add the --kill-connection option to the command, and the connection of all matching processes will be killed.

$ mysqlprocgrep --server=root:password@localhost \
> --match-command=sleep --age=+2m --kill-connection

In a similar way, if you have a long-running update from a special user (say, www-data), you can kill all the queries in one shot using the command:

$ mysqlprocgrep --server=root:password@localhost \
> --match-user=www-data --match-command=query    \
> --match-state=updating --age=+1m               \
> --kill-query

Example: finding objects using mysqlmetagrep

At times you find some odd reference to a column or index that you’re not quite sure about, or you want to find out what objects are using a column named ‘db’. In those cases, mysqlmetagrep comes in handy.

The utility is used to find all objects that contain a field that matches the provided pattern. The pattern can be given either as a SQL simple pattern as defined by the SQL standard (this is what you usually use with LIKE), or using POSIX regular expressions (which is what you usually use with REGEXP in SQL). The default is to use the SQL simple pattern. So, to search for any objects having a column ‘host’, we can use the command:

$ mysqlmetagrep --server=root:password@localhost --pattern=host --search=column
+------------------------+--------------+---------------+---------------------+-------------+----------+
| Connection             | Object Type  | Object Name   | Database            | Field Type  | Matches  |
+------------------------+--------------+---------------+---------------------+-------------+----------+
| root:*@localhost:3306  | TABLE        | PROCESSLIST   | information_schema  | COLUMN      | HOST     |
| root:*@localhost:3306  | TABLE        | columns_priv  | mysql               | COLUMN      | Host     |
| root:*@localhost:3306  | TABLE        | db            | mysql               | COLUMN      | Host     |
| root:*@localhost:3306  | TABLE        | host          | mysql               | COLUMN      | Host     |
| root:*@localhost:3306  | TABLE        | host          | mysql               | TABLE       | host     |
| root:*@localhost:3306  | TABLE        | procs_priv    | mysql               | COLUMN      | Host     |
| root:*@localhost:3306  | TABLE        | servers       | mysql               | COLUMN      | Host     |
| root:*@localhost:3306  | TABLE        | tables_priv   | mysql               | COLUMN      | Host     |
| root:*@localhost:3306  | TABLE        | user          | mysql               | COLUMN      | Host     |
+------------------------+--------------+---------------+---------------------+-------------+----------+

Since the SQL simple patterns are the default, this requires an exact match and you will only find objects with columns exactly named ‘host’. To find all columns containing the word ‘host’, you have to add wildcards to the pattern:

$ mysqlmetagrep --server=root:password@localhost --pattern=%host% --search=column
+------------------------+--------------+---------------+---------------------+-------------+------------+
| Connection             | Object Type  | Object Name   | Database            | Field Type  | Matches    |
+------------------------+--------------+---------------+---------------------+-------------+------------+
| root:*@localhost:3306  | TABLE        | PROCESSLIST   | information_schema  | COLUMN      | HOST       |
| root:*@localhost:3306  | TABLE        | columns_priv  | mysql               | COLUMN      | Host       |
| root:*@localhost:3306  | TABLE        | db            | mysql               | COLUMN      | Host       |
| root:*@localhost:3306  | TABLE        | general_log   | mysql               | COLUMN      | user_host  |
| root:*@localhost:3306  | TABLE        | host          | mysql               | COLUMN      | Host       |
| root:*@localhost:3306  | TABLE        | procs_priv    | mysql               | COLUMN      | Host       |
| root:*@localhost:3306  | TABLE        | servers       | mysql               | COLUMN      | Host       |
| root:*@localhost:3306  | TABLE        | slow_log      | mysql               | COLUMN      | user_host  |
| root:*@localhost:3306  | TABLE        | tables_priv   | mysql               | COLUMN      | Host       |
| root:*@localhost:3306  | TABLE        | user          | mysql               | COLUMN      | Host       |
+------------------------+--------------+---------------+---------------------+-------------+------------+

Creating and configuring servers with ease

There are three utilities that I will just mention briefly, because they are not very complicated to use: mysqlserverclone, mysqlreplicate, and mysqluserclone.

To create a scratch server using mysqlserverclone for testing something, it is as easy as:

$ mysqlserverclone --server=root:password@localhost \
> --new-data=/tmp/data
# WARNING: Root password for new instance has not been set.
# Cloning the MySQL server running on localhost.
# Creating new data directory...
# Configuring new instance...
# Locating mysql tools...
# Setting up empty database and mysql tables...
# Starting new instance of the server...
# Testing connection to new instance...
# Success!
#...done.

It will create a new server from the original, copy the existing databases, and start the server. You can supply a new port using the --new-port option, but if you do not do that, it will pick the default port 3307.
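So, to put the clone’s data directory somewhere else and pick your own port, the call looks something like this (the directory and port below are arbitrary example values):

$ mysqlserverclone --server=root:password@localhost \
> --new-data=/tmp/data2 --new-port=3310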

If you want to set up replication quickly and easily, you can do that using mysqlreplicate:

$ mysqlreplicate --master=root:password@localhost \
> --slave=root@localhost:3307 --rpl-user=repl_user:xyzzy
# master on localhost: ... connected.
# slave on localhost: ... connected.
# Checking for binary logging on master...
# Setting up replication...
# ...done.

When setting up replication, mysqlreplicate does some basic checking to ensure that replication will work. It checks that there is a server ID assigned and also checks that binary logging is enabled. If something is not right, it will abort the setup and report an error.

The last utility that is useful in setting servers up is mysqluserclone. The utility is used to create new users based on an existing one. So, to create a new user ‘chuck’ with password ‘xyzzy’ on localhost from an existing user ‘mats@localhost’, you can use the command:

$ mysqluserclone --source=root:password@localhost \
> --destination=root:password@localhost \
> mats@localhost chuck:xyzzy@localhost
# Source on localhost: ... connected.
# Destination on localhost: ... connected.
# Cloning 1 users...
# Cloning mats@localhost to user chuck:xyzzy@localhost
# ...done.

Moving stuff around with ease

There are three utilities that can be used to move data around: mysqldbcopy, mysqldbexport, and mysqldbimport.

With mysqldbcopy, you can copy a database from one server to another, or several databases from one server to another. When copying a database, it not only copies the table definitions, but all associated objects such as triggers, events, routines, and also database-level grants.
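A minimal invocation might look like the sketch below; it assumes mysqldbcopy takes the same --source and --destination connection options that mysqluserclone uses above, with the database name given as an argument (the names here are placeholders):

$ mysqldbcopy --source=root:password@localhost \
> --destination=root:password@otherhost \
> mydb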

With mysqldbexport you can export one or more databases into various formats, including (but not limited to) pure SQL, comma- and tab-separated values, and also a nice human-readable table.

With mysqldbimport you can import data in files into a database. In contrast to using LOAD DATA INFILE, this will generate the INSERT statements to inject the data into the server.
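As a rough sketch of a round trip with these two utilities (the --export, --import and --format options shown here are assumptions about the command-line interface and may differ in your version):

$ mysqldbexport --server=root:password@localhost --format=sql --export=both mydb > mydb.sql
$ mysqldbimport --server=root:password@otherhost --format=sql --import=both mydb.sql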

The road ahead

The current set of utilities is just a small start; we expect to add more utilities over time and also to improve the existing ones. If you want to help, there are several ways to do so.

We are very interested in feedback of any form—bug reports, suggestions for new utilities, suggestions for improving the existing utilities—so if you are a MySQL Expert DBA:

  • Let us know how to improve our utilities
  • Send us suggestions or ideas for new utilities
  • Write your own utilities
  • Contribute patches and/or new utilities

and if you are a Python programmer and/or developer:

  • Let us know how to be more Pythonic
  • Suggest improvements of the code
  • Build and/or contribute additional utilities on top of ones we provide

MySQL Workbench Plugin: Auto-Create Foreign Keys

Automatically creating Foreign Key relationships is a typical use case for developers working with the MyISAM storage engine. This has been a popular topic on the Blog and forums so we’re revisiting it here – with a new and improved plugin written in Python.

While the InnoDB storage engine supports foreign keys, MyISAM doesn’t, but developers often pick it for various reasons and leave the application to handle relationships itself. MySQL Workbench can be used to reverse engineer a database into a model to better visualize or maintain it. But since MyISAM doesn’t have foreign keys, databases that use it will be lacking a very important part of their structure in the diagrams. You can link the columns manually using the relationship tool, but you can also automate that. Databases are usually created so that columns that represent relationships have names that follow some kind of convention or pattern. For example, a city table that is related to a country table may have a column called country_id, used as the foreign key. The pattern there would be something like <table_name>_id. We can find all such pairs of columns between potential foreign keys and primary keys and create a foreign key for the tables.

There are two core routines needed by this implementation:

  • the first is to find candidate columns. That is, columns that could be foreign keys that reference primary keys of other tables, according to some pattern.
  • the second is the code to actually create the foreign keys from the possible columns found previously.

Look for Candidates

The following is the code to find candidate columns:

def get_fk_candidate_list(schema, fk_name_format, match_types=False):
    candidate_list = []
    possible_fks = {}
    # create the list of possible foreign keys out of the list of tables
    for table in schema.tables:
        if table.primaryKey and len(table.primaryKey.columns) == 1: # composite FKs not supported
            format_args = {'table':table.name, 'pk':table.primaryKey.columns[0].name}
            fkname = fk_name_format % format_args
            possible_fks[fkname] = table

    # go through all tables in schema again, this time to find columns that seem to be a fk
    for table in schema.tables:
        for column in table.columns:
            if possible_fks.has_key(column.name):
                ref_table = possible_fks[column.name]
                ref_column = ref_table.primaryKey.columns[0].referencedColumn
                if ref_column == column:
                    continue
                if match_types and ref_column.formattedType != column.formattedType:
                    continue

                candidate_list.append((table, column, ref_table, ref_column))
    return candidate_list

First, it will go through the list of all tables in the given schema and create a dictionary of possible foreign key column names, according to a format string provided by the user. The format string has the %(table)s and %(pk)s variables replaced with the table name and primary key column name.

With the dictionary of possible foreign key names at hand, it then goes through all columns of all tables looking for any column name that is in the dictionary. If a match is found, a tuple of table, column, referenced table and referenced column names are added to a list of candidates. If the match_types flag is True, it will also check if the column types match and discard anything that doesn’t.

Create Foreign Keys

With the list of candidate columns, we can create a foreign key object from the table column to its referenced column.

for table, column, ref_table, ref_column in candidates:
    # create an empty foreign key in the table, named after the referenced column
    fk = table.createForeignKey(ref_column.name+"_fk")
    fk.referencedTable = ref_table
    # pair up the local column with the referenced primary key column
    fk.columns.append(column)
    fk.referencedColumns.append(ref_column)

According to the db_Table documentation, table objects have a convenient createForeignKey method, which takes the foreign key name as an argument, and returns a new db_ForeignKey object added to the table. The foreign key is empty, so we set its referencedTable field and add the column/referenced column pair to the columns and referencedColumns lists, respectively.

Adding a GUI

Now, for a fancier version, we will create a dialog that takes the naming pattern from the user, shows the list of candidates and creates the foreign keys when a button is clicked:

This GUI version uses the internal mforms toolkit. It provides a native interface in any of the supported platforms. See the documentation for it here.

Here is the part of the code that creates the UI. You can use it as a template for your own plugin dialogs. Go to the end of the post for the full plugin code.

import mforms

class RelationshipCreator(mforms.Form):
  def __init__(self):
    mforms.Form.__init__(self, None, mforms.FormNone)

    self.set_title("Create Relationships for Tables")

    box = mforms.newBox(False)
    self.set_content(box)
    box.set_padding(12)
    box.set_spacing(12)

    label = mforms.newLabel(
"""This will automatically create foreign keys for tables that match
a certain column naming pattern, allowing you to visualize relationships
between MyISAM tables.

To use, fill the Column Pattern field with the naming convention used for
columns that are meant to be used as foreign keys. The %(table)s and %(pk)s
variable names will be substituted with the referenced table values.""")
    box.add(label, False, True)

    hbox = mforms.newBox(True)
    hbox.set_spacing(12)
    box.add(hbox, False, True)

    label = mforms.newLabel("Column Pattern:")
    hbox.add(label, False, True)
    self.pattern = mforms.newTextEntry()
    hbox.add(self.pattern, True, True)
    self.matchType = mforms.newCheckBox()
    self.matchType.set_text("Match column types")
    hbox.add(self.matchType, False, True)
    self.matchType.set_active(True)
    search = mforms.newButton()
    search.set_text("Preview Matches")
    search.add_clicked_callback(self.findMatches)
    hbox.add(search, False, True)

    self.pattern.set_value("%(table)s_id")

    self.candidateTree = mforms.newTreeView(mforms.TreeShowHeader)
    self.candidateTree.add_column(mforms.StringColumnType, "From Table", 100, False)
    self.candidateTree.add_column(mforms.StringColumnType, "Column", 100, False)
    self.candidateTree.add_column(mforms.StringColumnType, "Type", 100, False)
    self.candidateTree.add_column(mforms.StringColumnType, "To Table", 100, False)
    self.candidateTree.add_column(mforms.StringColumnType, "Column", 100, False)
    self.candidateTree.add_column(mforms.StringColumnType, "Type", 100, False)
    self.candidateTree.end_columns()
    box.add(self.candidateTree, True, True)

    hbox = mforms.newBox(True)
    hbox.set_spacing(12)
    self.matchCount = mforms.newLabel("")
    hbox.add(self.matchCount, False, True)
    self.cancelButton = mforms.newButton()
    self.cancelButton.set_text("Cancel")
    hbox.add_end(self.cancelButton, False, True)
    self.okButton = mforms.newButton()
    self.okButton.set_text("Create FKs")
    hbox.add_end(self.okButton, False, True)
    self.okButton.add_clicked_callback(self.createFKs)
    box.add(hbox, False, True)

    self.set_size(700, 600)

The dialog is implemented as a subclass of the mforms.Form class. That is the class for creating a window.

Line 5 calls the __init__ method of mforms.Form. Nothing fancy here, as we just want a plain normal window. Line 7 which follows, sets the title of the window.

Line 9, mforms.newBox(False), is used to create a box layouter that is “not horizontal” (i.e., vertical). This is used to lay out controls that are added to it from top to bottom, in a single column. Line 10 makes the window display this box as its first control. Anything else you want displayed in the window must be added to this box, either as a direct child or nested in other layouters that are children of this one.
Lines 11 and 12 set a padding around the box and a spacing between each item inside it, so we have a less cluttered appearance in our dialog.

Line 14 creates a text label control with some rather lengthy description text, which is then added to the box we created above. The 2nd argument to the add method tells the layouter to not expand the added control. That is, the label will allocate as much space as it needs to show all its contents. If it was set to True, it would instead use all the space left in its container. The 3rd argument tells the layouter to fill the space allocated for it with the control. Since expand is False in this case, this won’t make much difference, but if it was True, it would toggle whether the label should have the same size as the space allocated for it or not. Note the difference between allocated space and actually used space.

Line 24 creates another box, this time a horizontal one, which is then added to the previously created vertical box. Anything added to this box will be laid out as a single row inside the first box. Anything added to the first box after this point, will be added below the row created by this box.

Lines 28 to 39 create a label, a text field, a checkbox and a button, which are all laid out in a row, using the horizontal box above. For the search button, we’re setting a callback which will be called when the user clicks it. The callback is just a method in the same class, called findMatches. It doesn’t take any arguments.

A tree with 6 columns is then created from lines 43 to 50. The tree (which is just a plain list of rows) is set up by adding as many columns as desired, with their types, captions, default width and a flag telling whether the column is editable or not. After the columns are added, the end_columns() method must be called.

Finally, another row is added, starting from line 53. This row contains the Cancel and OK (Create FKs) buttons. Instead of add(), the add_end() method from Box is used, so that the buttons are laid out from right to left instead of from left to right.

At last, the default size of the window is set.

This image shows rectangles around each of the boxes used to lay out the dialog.

The Whole Thing

To make this a plugin, there are a few more bureaucratic lines of code that must be added. This code is described in our previous plugin tutorials and in the documentation.

The full plugin code contains the complete implementation. To install it, save it as relationship_create_grt.py and install it from the Scripting -> Install Plugin/Module… menu item. After restarting Workbench, a new item called Create Relationships from Columns will appear in the Plugins -> Catalog menu. Note: the plugin requires MySQL Workbench 5.2.29.

 

UPDATE: Fixed link to full plugin code