MySQL Workbench: PHP development helper plugins

In the new MySQL Workbench 5.2.35, a plugin has been added that will be of interest to PHP developers, both experienced and newbies.
The plugin contains a couple of functions that let you create PHP code straight out of your current work in the Workbench SQL Editor, ready to be pasted into your PHP program.

Screenshot of SQL Editor with PHP Plugins

Copy as PHP Code (Connect to Server) This first plugin takes the parameters from your currently open connection to MySQL and creates PHP code to connect to it.

$host="p:localhost";
$port=3306;
$socket="/var/mysql/mysql.sock";
$user="root";
$password="";
$dbname="";

$con = new mysqli($host, $user, $password, $dbname, $port, $socket);
if ($con->connect_errno)
	die('Could not connect to the database server: ' . $con->connect_error);

//$con->close();

Not a big deal, but saves some typing for getting something going quickly.

Copy as PHP Code (Iterate SELECT Results) This one will take your query and generate code to execute it and then iterate through the results. It will also parse the SQL and substitute any SQL @variables you use in it with PHP variables that are bound to the statement before execution. Result set rows are bound to PHP variables with the same name as the field (or alias, if your query specifies one).

So for the following query:

set @before_date = '1990-01-01';
set @after_date = '1980-01-01';

SELECT 
    emp_no, first_name, last_name, hire_date
FROM
    `employees`.`employees`
WHERE
    `hire_date` < @before_date AND `hire_date` > @after_date;

you would get this back:

$query = "SELECT      emp_no, first_name, last_name, hire_date FROM     `employees`.`employees` WHERE     `hire_date` < ? AND `hire_date` > ?";
$before_date = '';
$after_date = '';

if ($stmt = $con->prepare($query)) {
    $stmt->bind_param('ss', $before_date, $after_date); //FIXME: param types: s- string, i- integer, d- double, b- blob
    $stmt->execute();
    $stmt->bind_result($emp_no, $first_name, $last_name, $hire_date);
    while ($stmt->fetch()) {
        //printf("%s, %s, %s, %s\n", $emp_no, $first_name, $last_name, $hire_date);
    }
    $stmt->close();
}

This should be enough to let you quickly create a PHP program that does something with the results of a parameterized query, straight out of your normal SQL development workflow, and be safe from injection bugs as a bonus.
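The @variable substitution the plugin performs can be sketched in a few lines of Python. This is a simplified illustration, not the plugin's actual code (the function name and regex are my own):

```python
import re

def substitute_sql_variables(sql):
    """Replace MySQL @variables in a query with '?' placeholders,
    returning the rewritten SQL plus the variable names in order,
    ready for generating a bind_param() call."""
    names = []
    def repl(match):
        names.append(match.group(1))
        return "?"
    rewritten = re.sub(r"@(\w+)", repl, sql)
    return rewritten, names
```

Applied to the WHERE clause above, this yields the two ? placeholders and the names before_date and after_date, in binding order.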

Adding your own plugins

The plugins are simple, but more along these lines will be added in the future. And, more importantly, you can modify them to support your own needs. Here’s how:

First of all, find the plugin file. The filename is code_utils_grt.py and you should be able to find it by searching in the WB installation folder. To have your own version, rename it to something else like my_code_utils_grt.py, change a few identifiers so it won’t collide with the original built-in plugin (as described below) and use Scripting -> Install Plugin/Module… to install it to the correct place.

You can use the plugins there as a starting point for your own, or modify them to match your coding style, choice of PHP driver, etc.

The important things you need to change in the plugin copy before installing are:

  1. the module name, from CodeUtils to something else:

    ModuleInfo = DefineModule(name= "CodeUtils", author= "Oracle Corp.", version="1.0")

  2. the individual plugin names and identifiers (or just comment them out) and maybe the function names:

    @ModuleInfo.plugin("wb.sqlide.copyAsPHPConnect", caption= "Copy as PHP Code (Connect to Server)", input= [wbinputs.currentSQLEditor()], pluginMenu= "SQL/Utilities")
    @ModuleInfo.export(grt.INT, grt.classes.db_query_Editor)
    def copyAsPHPConnect(editor):

    The first parameter in @ModuleInfo.plugin is the plugin name and the 2nd is the caption. You can leave everything else as it is, in particular the metadata about the input parameters.

Here’s a sample plugin that you can use as a template. It’s stripped to the basics for easier understanding:

# imports and module registration, as at the top of code_utils_grt.py
from wb import *
import grt
import mforms

ModuleInfo = DefineModule(name= "CodeUtils", author= "Oracle Corp.", version="1.0")

@ModuleInfo.plugin("wb.sqlide.copyAsPHPQuery", caption= "Copy as PHP Code (Run Query)", input= [wbinputs.currentQueryBuffer()], pluginMenu= "SQL/Utilities")
@ModuleInfo.export(grt.INT, grt.classes.db_query_QueryBuffer)
def copyAsPHPQuery(qbuffer):
    sql= qbuffer.selectedText or qbuffer.script

    text = 'print "the query is %s\n"' % sql.replace('"', r'\"')

    mforms.Utilities.set_clipboard_text(text)

    mforms.App.get().set_status_text("Copied PHP code to clipboard")
    return 0

MySQL Workbench: Introducing Utilities

MySQL has a well-earned reputation for ease of use and “15-minutes-to-success”, since we continually focus on making the server easy to use. MySQL Workbench provides the visual tools for database design, development, and administration. However, many DBAs prefer using the command line, and there are many tasks that require the creation of scripts to do the job.

To make it easier to work with the server, the latest release of MySQL Workbench—version 5.2.31—contains a set of Python scripts intended to make life easier for DBAs by providing easy-to-use utilities for common tasks, which were introduced in the blog MySQL Workbench: Utilities. The set currently consists of just a few utilities, but will expand over time.

The utilities available in the Workbench are:

mysqldbcopy
Copy databases between servers.
mysqldbexport
Export databases to a file in different formats, including SQL, comma-separated files, and tab-separated files (and some more).
mysqldbimport
Import object definitions and/or data from a file—in different formats, similar to mysqldbexport—into a database.
mysqlindexcheck
Check for redundant or duplicate indexes on a list of tables or databases. It can also generate DROP statements to remove redundant indexes.
mysqlmetagrep
Search MySQL servers for objects containing fields matching a pattern.
mysqlprocgrep
Search MySQL servers for processes matching a pattern and perform actions.
mysqlreplicate
Set up replication between two servers.
mysqlserverclone
Start a new instance of a server to experiment with. This is used to test the utilities, but can be used whenever you need to set up a scratch server to test something.
mysqluserclone
Copy a MySQL user to one or more new users on another server.

Finding stuff on servers with ease

In order to search for various things on servers—for example, objects and processes—there are two commands that can be used: mysqlprocgrep and mysqlmetagrep. The name “grep” is borrowed from Unix, where the grep(1) command is used to search inside files; in this case, you can search among processes and inside metadata on a server.

Example: searching processes using mysqlprocgrep

With mysqlprocgrep you can find all processes that match certain conditions and either just print them out, or kill either the connection or the query. So, for instance, to see all connections that have been idle for more than 2 minutes, we can use:

$ mysqlprocgrep --server=root:password@localhost --match-command=sleep --age=+2m
+------------------------+-----+-------+------------+-------+----------+-------+--------+-------+
| Connection             | Id  | User  | Host       | Db    | Command  | Time  | State  | Info  |
+------------------------+-----+-------+------------+-------+----------+-------+--------+-------+
| root:*@localhost:3306  | 39  | mats  | localhost  | None  | Sleep    | 248   |        | None  |
+------------------------+-----+-------+------------+-------+----------+-------+--------+-------+

In the example above, I would especially like you to note how the server connection information is provided. To provide information on what server to connect to and how, we have introduced a new syntax to represent the connection information, which we call a connection specification:

user:password@host:port:socket

In the connection specification, the password, port, and socket are optional and can be omitted, in which case defaults are used instead.

This is used instead of the separate switches (for example, --host and --user) that the server client programs use. In addition to being easier to write, it also makes it possible to provide multiple servers to a command (where it makes sense, of course). You might be interested in knowing that both mysqlprocgrep and mysqlmetagrep accept multiple servers.
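To illustrate, here is a rough sketch of how a user:password@host:port:socket specification might be parsed. The actual utilities have their own parser, so treat the function name and the regex as assumptions:

```python
import re

def parse_connection(spec):
    """Split a user[:password]@host[:port[:socket]] specification into
    its parts; password, port and socket are optional."""
    m = re.match(r"(?P<user>[^:@]+)(?::(?P<password>[^@]*))?@"
                 r"(?P<host>[^:]+)(?::(?P<port>\d+))?(?::(?P<socket>.+))?$",
                 spec)
    if not m:
        raise ValueError("invalid connection specification: %r" % spec)
    parts = m.groupdict()
    if parts["port"]:
        parts["port"] = int(parts["port"])
    return parts
```

Omitted parts simply come back as None, which is where the defaults would kick in.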

If you now want to kill these idle connections, you can just add the --kill-connection option to the command, and the connection of all matching processes will be killed.

$ mysqlprocgrep --server=root:password@localhost \
> --match-command=sleep --age=+2m --kill-connection

In a similar way, if you have a long-running update from a special user (say, www-data), you can kill all the queries in one shot using the command:

$ mysqlprocgrep --server=root:password@localhost \
> --match-user=www-data --match-command=query    \
> --match-state=updating --age=+1m               \
> --kill-query

Example: finding objects using mysqlmetagrep

At times you find some odd reference to a column or index and you’re not quite sure where it is used, or you want to find out what objects are using a column named ‘db’. In those cases, mysqlmetagrep comes in handy.

The utility is used to find all objects that contain a field matching the provided pattern. The pattern can be given either as a SQL simple pattern as defined by the SQL standard (this is what you usually use with LIKE), or as a POSIX regular expression (which is what you usually use with REGEXP in SQL). The default is to use the SQL simple pattern. So, to search for any objects having a column ‘host’, we can use the command:

$ mysqlmetagrep --server=root:password@localhost --pattern=host --search=column
+------------------------+--------------+---------------+---------------------+-------------+----------+
| Connection             | Object Type  | Object Name   | Database            | Field Type  | Matches  |
+------------------------+--------------+---------------+---------------------+-------------+----------+
| root:*@localhost:3306  | TABLE        | PROCESSLIST   | information_schema  | COLUMN      | HOST     |
| root:*@localhost:3306  | TABLE        | columns_priv  | mysql               | COLUMN      | Host     |
| root:*@localhost:3306  | TABLE        | db            | mysql               | COLUMN      | Host     |
| root:*@localhost:3306  | TABLE        | host          | mysql               | COLUMN      | Host     |
| root:*@localhost:3306  | TABLE        | host          | mysql               | TABLE       | host     |
| root:*@localhost:3306  | TABLE        | procs_priv    | mysql               | COLUMN      | Host     |
| root:*@localhost:3306  | TABLE        | servers       | mysql               | COLUMN      | Host     |
| root:*@localhost:3306  | TABLE        | tables_priv   | mysql               | COLUMN      | Host     |
| root:*@localhost:3306  | TABLE        | user          | mysql               | COLUMN      | Host     |
+------------------------+--------------+---------------+---------------------+-------------+----------+

Since SQL simple patterns are the default, this requires an exact match and you will only find objects with columns named exactly ‘host’. To find all columns containing the word ‘host’, you have to add wildcards to the pattern:

$ mysqlmetagrep --server=root:password@localhost --pattern=%host% --search=column
+------------------------+--------------+---------------+---------------------+-------------+------------+
| Connection             | Object Type  | Object Name   | Database            | Field Type  | Matches    |
+------------------------+--------------+---------------+---------------------+-------------+------------+
| root:*@localhost:3306  | TABLE        | PROCESSLIST   | information_schema  | COLUMN      | HOST       |
| root:*@localhost:3306  | TABLE        | columns_priv  | mysql               | COLUMN      | Host       |
| root:*@localhost:3306  | TABLE        | db            | mysql               | COLUMN      | Host       |
| root:*@localhost:3306  | TABLE        | general_log   | mysql               | COLUMN      | user_host  |
| root:*@localhost:3306  | TABLE        | host          | mysql               | COLUMN      | Host       |
| root:*@localhost:3306  | TABLE        | procs_priv    | mysql               | COLUMN      | Host       |
| root:*@localhost:3306  | TABLE        | servers       | mysql               | COLUMN      | Host       |
| root:*@localhost:3306  | TABLE        | slow_log      | mysql               | COLUMN      | user_host  |
| root:*@localhost:3306  | TABLE        | tables_priv   | mysql               | COLUMN      | Host       |
| root:*@localhost:3306  | TABLE        | user          | mysql               | COLUMN      | Host       |
+------------------------+--------------+---------------+---------------------+-------------+------------+
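The difference between the two runs comes down to how SQL simple patterns translate to matching: % matches any sequence, _ matches a single character, and the whole pattern must match the whole name. A sketch of that standard translation into a Python regular expression (the helper is illustrative):

```python
import re

def like_to_regex(pattern):
    """Convert a SQL simple pattern (%, _) into an anchored
    Python regular expression string."""
    out = []
    for ch in pattern:
        if ch == "%":
            out.append(".*")
        elif ch == "_":
            out.append(".")
        else:
            out.append(re.escape(ch))
    return "^" + "".join(out) + "$"
```

This is why the pattern host misses user_host, while %host% picks it up.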

Creating and configuring servers with ease

There are three utilities that I will just mention briefly, because they are not very complicated to use: mysqlserverclone, mysqlreplicate, and mysqluserclone.

Creating a scratch server for testing something with mysqlserverclone is as easy as:

$ mysqlserverclone --server=root:password@localhost \
> --new-data=/tmp/data
# WARNING: Root password for new instance has not been set.
# Cloning the MySQL server running on localhost.
# Creating new data directory...
# Configuring new instance...
# Locating mysql tools...
# Setting up empty database and mysql tables...
# Starting new instance of the server...
# Testing connection to new instance...
# Success!
#...done.

It will create a new server instance based on the original server’s configuration, set up an empty data directory with the mysql system tables, and start the server. You can supply a new port using --new-port, but if you do not, it will use the default port 3307.

If you want to set up replication quickly and easily, you can do that using mysqlreplicate:

$ mysqlreplicate --master=root:password@localhost \
> --slave=root@localhost:3307 --rpl-user=repl_user:xyzzy
# master on localhost: ... connected.
# slave on localhost: ... connected.
# Checking for binary logging on master...
# Setting up replication...
# ...done.

When setting up replication, mysqlreplicate does some basic checking to ensure that replication will work. It checks that there is a server ID assigned and that binary logging is enabled. If something is not right, it will abort the setup and report an error.
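Those checks boil down to inspecting two system variables on the master. A sketch of the logic, taking the variables as a plain dict (the helper itself is mine, not part of mysqlreplicate; server_id and log_bin are real MySQL system variables):

```python
def check_master_ready(variables):
    """Return a list of problems that would make replication setup abort:
    the master needs a non-zero server_id and binary logging enabled."""
    problems = []
    if int(variables.get("server_id", 0)) == 0:
        problems.append("no server ID assigned on master")
    if variables.get("log_bin", "OFF").upper() not in ("ON", "1"):
        problems.append("binary logging not enabled on master")
    return problems
```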

The last utility that is useful in setting up servers is mysqluserclone. It is used to create new users based on an existing one. So, to create a new user ‘chuck’ with password ‘xyzzy’ on localhost from the existing user ‘mats@localhost’, you can use the command:

$ mysqluserclone --source=root:password@localhost \
> --destination=root:password@localhost \
> mats@localhost chuck:xyzzy@localhost
# Source on localhost: ... connected.
# Destination on localhost: ... connected.
# Cloning 1 users...
# Cloning mats@localhost to user chuck:xyzzy@localhost
# ...done.

Moving stuff around with ease

There are three utilities that can be used to move data around: mysqldbcopy, mysqldbexport, and mysqldbimport.

With mysqldbcopy, you can copy a database from one server to another, or several databases from one server to another. When copying a database, it not only copies the table definitions, but all associated objects such as triggers, events, routines, and also database-level grants.

With mysqldbexport you can export one or more databases into various formats, including (but not limited to) pure SQL, comma- and tab-separated values, and also a nice human-readable table.

With mysqldbimport you can import data in files into a database. In contrast to using LOAD DATA INFILE, this will generate the INSERT statements to inject the data into the server.
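A sketch of the idea: rather than bulk-loading a file, each row becomes its own INSERT statement. The helper below is illustrative only (identifiers are assumed to be already safe; mysqldbimport has its own generator):

```python
def rows_to_inserts(table, columns, rows):
    """Turn rows into individual INSERT statements, the way an
    import-by-statements approach injects data into the server."""
    statements = []
    for row in rows:
        # escape single quotes in values the SQL way, by doubling them
        values = ", ".join("'%s'" % str(v).replace("'", "''") for v in row)
        statements.append("INSERT INTO %s (%s) VALUES (%s);"
                          % (table, ", ".join(columns), values))
    return statements
```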

The road ahead

The current set of utilities is just a small start; we expect to add more utilities over time and to improve the existing ones.

We are very interested in feedback of any form—bug reports, suggestions for new utilities, suggestions for improving the existing utilities—so if you are a MySQL expert DBA:

  • Let us know how to improve our utilities
  • Send us suggestions or ideas for new utilities
  • Write your own utilities
  • Contribute patches and/or new utilities

and if you are a Python programmer and/or developer:

  • Let us know how to be more Pythonic
  • Suggest improvements of the code
  • Build and/or contribute additional utilities on top of ones we provide

MySQL Workbench: Utilities

MySQL Workbench is an extensible tool for database design, development, and administration, with support for Python scripting and plugins that can automate basic tasks.

Starting with MySQL Workbench 5.2.31, and with MySQL designers, developers, and administrators in mind, we’ve added a new plugin called MySQL Utilities. MySQL Utilities is a set of Python scripts that automate basic tasks. And as MySQL Workbench is about helping design, develop and administer databases, we’ve put the MySQL Utilities at your fingertips so you can quickly get to them and get your job done. Users benefit from proven scripts, leverage the testing and ideas of the community and the experts at MySQL, and avoid having to reinvent common scripts.

Let’s go on a quick tour…

The MySQL Utilities Plugin starts a shell that contains a set of python scripts that perform tasks such as copying, exporting, and importing databases, finding database objects or processes, and cloning servers and users.

To start MySQL Utilities, start Workbench and click Plugins -> Start Shell for MySQL Utilities, or click the Utilities shortcut icon in the Plugins Panel at the top of the Home tab.

For usage information just type the command name without any parameters.
For documentation of each utility look here.

For more information about MySQL Workbench: Utilities check out Mats’ post.

MySQL Workbench Plugin: Auto-Create Foreign Keys

Automatically creating foreign key relationships is a typical use case for developers working with the MyISAM storage engine. This has been a popular topic on the blog and forums, so we’re revisiting it here—with a new and improved plugin written in Python.

While the InnoDB storage engine supports foreign keys, MyISAM doesn’t, but developers often pick it for various reasons and leave the application to handle relationships itself. MySQL Workbench can be used to reverse engineer a database to a model to better visualize or maintain it. But since MyISAM doesn’t have foreign keys, databases that use it will be lacking a very important part of their structure in the diagrams. You can link the columns manually, using the relationship tool, but you can also automate that. Databases are usually created so that columns that represent relationships have names that follow some kind of convention or pattern. For example, a city table that is related to a country table may have a column called country_id, used as the foreign key. The pattern there would be something like <table_name>_id. We can find all such pairs of columns between potential foreign keys and primary keys and create a foreign key for the tables.

There are two core routines needed by this implementation:

  • the first is to find candidate columns. That is, columns that could be foreign keys that reference primary keys of other tables, according to some pattern.
  • the second is the code to actually create the foreign keys from the possible columns found previously.

Look for Candidates

The following is the code to find candidate columns:

def get_fk_candidate_list(schema, fk_name_format, match_types=False):
    candidate_list = []
    possible_fks = {}
    # create the list of possible foreign keys out of the list of tables
    for table in schema.tables:
        if table.primaryKey and len(table.primaryKey.columns) == 1: # composite FKs not supported
            format_args = {'table':table.name, 'pk':table.primaryKey.columns[0].name}
            fkname = fk_name_format % format_args
            possible_fks[fkname] = table

    # go through all tables in schema again, this time to find columns that seem to be a fk
    for table in schema.tables:
        for column in table.columns:
            if column.name in possible_fks:
                ref_table = possible_fks[column.name]
                ref_column = ref_table.primaryKey.columns[0].referencedColumn
                if ref_column == column:
                    continue
                if match_types and ref_column.formattedType != column.formattedType:
                    continue

                candidate_list.append((table, column, ref_table, ref_column))
    return candidate_list

First, it will go through the list of all tables in the given schema and create a dictionary of possible foreign key column names, according to a format string provided by the user. The format string has the %(table)s and %(pk)s variables replaced with the table name and primary key column name.
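For example, with the default naming pattern the dictionary keys are built with plain Python %-formatting and named arguments:

```python
fk_name_format = "%(table)s_id"
key = fk_name_format % {"table": "country", "pk": "id"}
# key == "country_id": a column with this name in another table is a
# candidate foreign key referencing the country table's primary key
```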

With the dictionary of possible foreign key names at hand, it then goes through all columns of all tables, looking for any column name that is in the dictionary. If a match is found, a (table, column, referenced table, referenced column) tuple is added to the list of candidates. If the match_types flag is True, it will also check whether the column types match and discard anything that doesn’t.

Create Foreign Keys

With the list of candidate columns, we can create a foreign key object from the table column to its referenced column.

for table, column, ref_table, ref_column in candidates:
    fk = table.createForeignKey(ref_column.name+"_fk")
    fk.referencedTable = ref_table
    fk.columns.append(column)
    fk.referencedColumns.append(ref_column)

According to the db_Table documentation, table objects have a convenient createForeignKey method, which takes the foreign key name as an argument, and returns a new db_ForeignKey object added to the table. The foreign key is empty, so we set its referencedTable field and add the column/referenced column pair to the columns and referencedColumns lists, respectively.

Adding a GUI

Now, for a fancier version, we will create a dialog that takes the naming pattern from the user, shows the list of candidates and creates the foreign keys when a button is clicked:

This GUI version uses the internal mforms toolkit. It provides a native interface in any of the supported platforms. See the documentation for it here.

Here is the part of the code that creates the UI. You can use it as a template for your own plugin dialogs. Go to the end of the post for the full plugin code.

import mforms

class RelationshipCreator(mforms.Form):
  def __init__(self):
    mforms.Form.__init__(self, None, mforms.FormNone)

    self.set_title("Create Relationships for Tables")

    box = mforms.newBox(False)
    self.set_content(box)
    box.set_padding(12)
    box.set_spacing(12)

    label = mforms.newLabel(
"""This will automatically create foreign keys for tables that match
a certain column naming pattern, allowing you to visualize relationships
between MyISAM tables.

To use, fill the Column Pattern field with the naming convention used for
columns that are meant to be used as foreign keys. The %(table)s and %(pk)s
variable names will be substituted with the referenced table values.""")
    box.add(label, False, True)

    hbox = mforms.newBox(True)
    hbox.set_spacing(12)
    box.add(hbox, False, True)

    label = mforms.newLabel("Column Pattern:")
    hbox.add(label, False, True)
    self.pattern = mforms.newTextEntry()
    hbox.add(self.pattern, True, True)
    self.matchType = mforms.newCheckBox()
    self.matchType.set_text("Match column types")
    hbox.add(self.matchType, False, True)
    self.matchType.set_active(True)
    search = mforms.newButton()
    search.set_text("Preview Matches")
    search.add_clicked_callback(self.findMatches)
    hbox.add(search, False, True)

    self.pattern.set_value("%(table)s_id")

    self.candidateTree = mforms.newTreeView(mforms.TreeShowHeader)
    self.candidateTree.add_column(mforms.StringColumnType, "From Table", 100, False)
    self.candidateTree.add_column(mforms.StringColumnType, "Column", 100, False)
    self.candidateTree.add_column(mforms.StringColumnType, "Type", 100, False)
    self.candidateTree.add_column(mforms.StringColumnType, "To Table", 100, False)
    self.candidateTree.add_column(mforms.StringColumnType, "Column", 100, False)
    self.candidateTree.add_column(mforms.StringColumnType, "Type", 100, False)
    self.candidateTree.end_columns()
    box.add(self.candidateTree, True, True)

    hbox = mforms.newBox(True)
    hbox.set_spacing(12)
    self.matchCount = mforms.newLabel("")
    hbox.add(self.matchCount, False, True)
    self.cancelButton = mforms.newButton()
    self.cancelButton.set_text("Cancel")
    hbox.add_end(self.cancelButton, False, True)
    self.okButton = mforms.newButton()
    self.okButton.set_text("Create FKs")
    hbox.add_end(self.okButton, False, True)
    self.okButton.add_clicked_callback(self.createFKs)
    box.add(hbox, False, True)

    self.set_size(700, 600)

The dialog is implemented as a subclass of the mforms.Form class. That is the class for creating a window.

Line 5 calls the __init__ method of mforms.Form. Nothing fancy here, as we just want a plain, normal window. Line 7, which follows, sets the title of the window.

Line 9, mforms.newBox(False), is used to create a box layouter that is “not horizontal” (i.e., vertical). This is used to lay out controls that are added to it from top to bottom, in a single column. Line 10 makes the window display this box as its first control. Anything else you want displayed in the window must be added to this box, either as a direct child or nested in other layouters that are children of this one.
Lines 11 and 12 set a padding around the box and a spacing between each item inside it, so we have a less cluttered appearance in our dialog.

Line 14 creates a text label control with some rather lengthy description text, which is then added to the box we created above. The 2nd argument to the add method tells the layouter to not expand the added control. That is, the label will allocate as much space as it needs to show all its contents. If it was set to True, it would instead use all the space left in its container. The 3rd argument tells the layouter to fill the space allocated for it with the control. Since expand is False in this case, this won’t make much difference, but if it was True, it would toggle whether the label should have the same size as the space allocated for it or not. Note the difference between allocated space and actually used space.

Line 24 creates another box, this time a horizontal one, which is then added to the previously created vertical box. Anything added to this box will be laid out as a single row inside the first box. Anything added to the first box after this point, will be added below the row created by this box.

Lines 28 to 39 create a label, a text field, a checkbox and a button, which are all laid out in a row using the horizontal box above. For the search button, we set a callback that will be called when the user clicks it. The callback is just a method in the same class, called findMatches. It doesn’t take any arguments.

A tree with 6 columns is then created in lines 43 to 50. The tree (which is just a plain list of rows) is set up by adding as many columns as desired, with their types, captions, default widths and a flag telling whether the column is editable or not. After the columns are added, the end_columns() method must be called.

Finally, another row is added, starting from line 53. This row contains the Cancel and OK (Create FKs) buttons. Instead of add(), the add_end() method from Box is used, so that the buttons are laid out from right to left instead of from left to right.

At last, the default size of the window is set.

This image shows rectangles around each of the boxes used to lay out the dialog.

The Whole Thing

To make this a plugin, there are a few more bureaucratic lines of code that must be added. This code is described in our previous plugin tutorials and in the documentation.

The full plugin code contains the complete implementation. To install it, save it as relationship_create_grt.py and install it from the Scripting -> Install Plugin/Module… menu item. After restarting Workbench, a new item called Create Relationships from Columns will appear in the Plugins -> Catalog menu. Note: the plugin requires MySQL Workbench 5.2.29


UPDATE: Fixed link to full plugin code

MySQL Workbench Plugin: Execute Query to Text Output

In MySQL Workbench 5.2.26 a new query execution command is available, where query output is sent as text to the text Output tab of the SQL Editor. Some MySQL Workbench users liked the “Results to Text” option available in Microsoft SQL Server Management Studio. The cool thing is that with a few lines of Python we implemented this command using the SQL Editor scripting API.

For full documentation on scripting and plugin development, refer to the documentation pointers page.

In this post, you will learn:

  • Python script for implementing “Results to Text”
  • How you can customize the script to deliver your own customized results format command.

Execute Query to Text (accessible from Query -> Execute (All or Selection) to Text) will execute the query you typed and print the results in textual form to the Output tab in the SQL Editor. The output is similar to that of the MySQL command line client and can be copy/pasted as plain text. But the command line client has a different, interesting output format, activated through the --vertical command line option. It changes the output from a tabular to a form-like format, where row values are displayed as column name/value pairs:

We will try emulating that format using our modified plugin.
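A small sketch of the vertical format we are after, independent of Workbench (this helper is illustrative and not part of the plugin):

```python
def format_vertical(columns, rows):
    """Render rows the way the MySQL client's --vertical (\G) mode does:
    a numbered row separator, then one right-aligned name: value per line."""
    width = max(len(name) for name in columns)
    lines = []
    for i, row in enumerate(rows, 1):
        lines.append("*" * 27 + " %d. row " % i + "*" * 27)
        for name, value in zip(columns, row):
            lines.append("%*s: %s" % (width, name, value))
    return "\n".join(lines)
```

The modified plugin below produces output along these lines from the SQL Editor's result sets.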

The Original Plugin Code

The goals for the original plugin shipped with Workbench were:

  • Provide an alternative to the Results Grid output
  • Provide MySQL CLI and MS SQL Server Management Studio style text-formatted results
  • Add “Execute to Text” to the Query Menu

You can locate the original code for the plugin we want to modify in the sqlide_grt.py file in the MySQL Workbench distribution (on Windows it will be in the modules directory in the WB folder, on Mac OS X in MySQLWorkbench.app/Contents/PlugIns and on Linux in /usr/lib/mysql-workbench/modules).

# import the wb module
from wb import *
# import the grt module
import grt
# import the mforms module for GUI stuff
import mforms

# define this Python module as a GRT module
ModuleInfo = DefineModule(name= "SQLIDEUtils", author= "Oracle Corp.", version="1.0")

@ModuleInfo.plugin("wb.sqlide.executeToTextOutput", caption= "Execute Query Into Text Output", input= [wbinputs.currentQueryBuffer()], pluginMenu= "SQL/Utilities")
@ModuleInfo.export(grt.INT, grt.classes.db_query_QueryBuffer)
def executeQueryAsText(qbuffer):
  editor= qbuffer.owner
  sql= qbuffer.selectedText or qbuffer.script
  resultsets= editor.executeScript(sql)
  editor.addToOutput("Query Output:\n", 1)
  for result in resultsets:
    editor.addToOutput("> %s\n\n" % result.sql, 0)
    line= []
    column_lengths=[]
    ncolumns= len(result.columns)
    for column in result.columns:
      line.append(column.name + " "*5)
      column_lengths.append(len(column.name)+5)

    separator = []
    for c in column_lengths:
        separator.append("-"*c)
    separator= " + ".join(separator)
    editor.addToOutput("+ "+separator+" +\n", 0)

    line= " | ".join(line)
    editor.addToOutput("| "+line+" |\n", 0)

    editor.addToOutput("+ "+separator+" +\n", 0)

    rows = []
    ok= result.goToFirstRow()
    while ok:
      line= []
      for i in range(ncolumns):
        value = result.stringFieldValue(i)
        if value is None:
          value = "NULL"
        line.append(value.ljust(column_lengths[i]))
      line= " | ".join(line)
      rows.append("| "+line+" |\n")
      ok= result.nextRow()
    # much faster to do it at once than add lines one by one
    editor.addToOutput("".join(rows), 0)

    editor.addToOutput("+ "+separator+" +\n", 0)
    editor.addToOutput("%i rows\n" % len(rows), 0)

  return 0

Lines 1 to 6 import some Workbench-specific Python modules:

  • wb, which contains various utility functions for creating plugins;
  • grt, for working with Workbench objects and interfacing with it; and
  • mforms, for creating GUIs.

@ModuleInfo.plugin("wb.sqlide.executeToTextOutput", caption= "Execute Query Into Text Output", input= [wbinputs.currentQueryBuffer()], pluginMenu= "SQL/Utilities")
@ModuleInfo.export(grt.INT, grt.classes.db_query_QueryBuffer)
def executeQueryAsText(qbuffer):

@ModuleInfo.export(grt.INT, grt.classes.db_query_QueryBuffer) declares the return type (grt.INT by convention) and argument types of the plugin function defined further down. In the line above it, a unique identifier for the plugin is given, followed by a default caption to use in places such as menus, the input values taken by the plugin and the location in the Plugins menu where it should be placed.

The plugin executes the current query, so the argument it requests is wbinputs.currentQueryBuffer() (the selected query buffer tab), which has a type of db_query_QueryBuffer. You can read more about the available types and inputs in the relevant documentation.

The code itself is straightforward:

  1. it takes the query code,
  2. executes it through the SQL Editor object that owns the query buffer and
  3. renders the output in the text Output tab.

Custom Plugin

The goals for the custom plugin are:

  • Provide a custom alternative to the Results Grid output
  • Provide text results column name/value pairs formatted output
  • Add “Execute to Vertical Formatted Text” to the Query Menu

To create the modified version, we can copy the above plugin and make some changes.

  1. copy the plugin file from the Workbench plugins directory to some folder of yours (e.g. your home directory or Desktop);
  2. rename it to verticalquery_grt.py;
  3. open it in some text editor of your liking.

First, we change the module info:

ModuleInfo = DefineModule(name= "QueryToVerticalFormat", author= "WB Blog", version="1.0")

The plugin arguments are the same, so we only need to update its identifier and name:

@ModuleInfo.plugin("wbblog.executeToTextOutputVertical", caption= "Execute Query Into Text Output (vertical)", input= [wbinputs.currentQueryBuffer()], pluginMenu= "SQL/Utilities")
@ModuleInfo.export(grt.INT, grt.classes.db_query_QueryBuffer)
def executeQueryAsTextVertical(qbuffer):

You can see the body of the function in the complete sample module file here.
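Even without the complete file, the core of the vertical version is easy to sketch: instead of padding values into a table, each row is printed as right-aligned name/value pairs under a numbered banner, the way the command line client does with --vertical. Below is a minimal, Workbench-independent sketch of that formatting step; in the real plugin the names would come from result.columns and the returned string would be passed to editor.addToOutput(). The banner width of 27 stars is an assumption, not the client's exact value.

```python
def format_row_vertical(n, names, values, stars=27):
    """Format one result row in the MySQL client's --vertical style:
    a numbered banner followed by right-aligned 'name: value' pairs."""
    width = max(len(name) for name in names)
    lines = ["%s %i. row %s" % ("*" * stars, n, "*" * stars)]
    for name, value in zip(names, values):
        lines.append("%s: %s" % (name.rjust(width), value))
    return "\n".join(lines)

print(format_row_vertical(1, ["emp_no", "first_name", "last_name"],
                          ["10001", "Georgi", "Facello"]))
```

Inside the plugin's row loop, this replaces the ljust/separator logic of the tabular version; everything else (executing the script, iterating resultsets) stays the same.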

Trying it Out

To install the module, you can use the Scripting -> Install Module/Script File… menu command. Select the newly created plugin file (verticalquery_grt.py) from the file browser and click Open.
Once installed, restart Workbench and run it:

You can download the entire, modified sample plugin here

MySQL Workbench Administrator Plugin

Recently we released Workbench 5.2.4 Alpha. This version has some new features, and among them is the Workbench Administrator plugin, or WBA plugin for short.

The plugin aims to ease the management of server instances. What we offer with WBA is a simple way to check the status of, configure and control a single server instance. Some parts will resemble the discontinued MySQL Administrator.

And here is a short summary of what we will have in WBA:

  • Start/Stop server
  • Edit server configuration (my.cnf)
  • Manage user accounts
  • Monitor user connections
  • Status/Server variables browsing
  • Log browsing
  • Dump/Restore

In this alpha we have implemented a subset, which targets local usage only. Remote management and administration are upcoming. The subset includes:

  • Add new Server Profiles
  • Start/Stop the Server
  • Edit Server Configuration (my.cnf)
  • Monitor user connections
  • Status/Server variables browsing

To launch the plugin, run Workbench 5.2.4 and register a server instance by double-clicking the Manage icon. Once the server instance is registered, you can double-click its icon. But for the moment, let's dive into what is important to specify at the server instance creation stage.

Our target is the ‘System Profile’ tab of the ‘Manage Server Instances’ form. Mostly, the whole process boils down to selecting the right profile. Nevertheless, for now (I hope you still remember this is an alpha release), one of the vitally important things is to check the location of the ini or cnf file. You may also check the commands used to start, stop and query server status. For example, the OS X profile uses this line to check status: ps -cxa | grep mysqld.

All the commands specified in these entries are run as sub-processes, and sometimes they may require super-user privileges; for that purpose there is a ‘use sudo’ checkbox. Normally, a command's exit code is what WBA checks to determine the success or failure of the command.
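The exit-code convention used here is the standard one for sub-processes: status 0 means success, anything else is a failure. A tiny, illustrative sketch of the principle (this is not WBA's actual implementation):

```python
import subprocess
import sys

def command_succeeded(cmd):
    """Run a command and report success based on its exit status
    (0 = success, anything else = failure)."""
    return subprocess.call(cmd) == 0

# A trivially successful and a trivially failing command:
print(command_succeeded([sys.executable, "-c", "pass"]))                      # True
print(command_succeeded([sys.executable, "-c", "import sys; sys.exit(1)"]))  # False
```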

When adding a new Server Profile you also need to select a Connection to the server (you should create that connection in the Connection Management dialog first). That will allow you to browse the Connections and Server Variables in the WBA. It will also be used to update the Server Status every second and show the current state (coming in the next alpha).

The way to change the my.cnf/my.ini config file is straightforward. First, if you want a value to be written to the config file, enable the checkbox to the left of the option. After all changes have been made, press either the Apply or the Discard button. Apply will display a dialog that shows what has changed, the resulting file and the commands used to save it. If the changes are correct, press Apply on that form. That is it.
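The checkbox-then-Apply behavior can be pictured as a small diff step: only options whose checkbox is enabled are considered, and Apply first shows which values would actually change. A rough, hypothetical sketch of that logic (the option names and data layout are made up for illustration; WBA's real code works on the parsed config file):

```python
def pending_changes(current, edited, enabled):
    """Return {option: (old, new)} for enabled options whose edited
    value differs from what the config file currently holds."""
    changes = {}
    for option, new_value in edited.items():
        if option in enabled and current.get(option) != new_value:
            changes[option] = (current.get(option), new_value)
    return changes

current = {"max_connections": "100"}
edited = {"max_connections": "200", "key_buffer_size": "16M"}
# key_buffer_size is ignored because its checkbox is not enabled:
print(pending_changes(current, edited, enabled={"max_connections"}))
```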

Stay tuned for the updates on WBA topic.

MySQL Workbench speaks SQLite

As many of you already know, MySQL Workbench offers powerful scripting support and a plugin interface that allows anybody to extend the application or add new functionality.

The latest addition to the WB plugin family has been written by Thomas Henlich. It features SQLite compatible SQL export of your Workbench models.

Download it from his blog and please provide feedback on the forums.

We are working on improving the scripting support in WB 5.2 and finally adding proper documentation so even more people can contribute.

Python Scripting in Workbench

Updated: 2010-8-11 updated sample code to reflect changes in plugin registration API

Python support has been added to the latest version of MySQL Workbench.

In addition to Lua, you can now write scripts and modules or interact with the GRT shell using the Python language. The integration allows you to use GRT objects and modules mostly in the same way you would do with normal Python objects. The built-in grt module contains everything related to the GRT that’s exposed to Python, including:

  • custom types for GRT lists, dicts and objects
  • wrappers for GRT classes, that can be used and instantiated as a normal Python class;
  • wrappers for registered GRT modules, that can be used like normal modules;
  • a reference to the root node of the GRT globals tree;

You can inspect these objects with the standard dir() command and in some cases with help(), to access the built-in documentation (although still incomplete).

Below is a sample shell session demonstrating the Python support:

>>> import grt

>>> dir(grt)
['DICT', 'DOUBLE', 'Dict', 'Function', 'INT', 'LIST', 'List', 'Method', 'Module', 'OBJECT', 'Object', 'STRING', '__GRT__', '__doc__', '__name__', 'classes', 'get', 'modules', 'root', 'send_error', 'send_info', 'send_output', 'send_warning', 'softspace', 'write']

>>> grt.classes
<module 'grt.classes' (built-in)>

>>> dir(grt.classes)
['GrtLogEntry', 'GrtLogObject', 'GrtMessage', 'GrtNamedObject', 'GrtObject', 'GrtStoredNote', 'GrtVersion', '__doc__', '__name__', 'app_Application', 'app_CommandItem', 'app_CustomDataField', 'app_Document', 'app_DocumentInfo', 'app_DocumentPlugin', 'app_Info', ....

>>> for schema in grt.root.wb.doc.physicalModels[0].catalog.schemata:
...     print schema.name

mydb
sakila

>>> table = grt.root.wb.doc.physicalModels[0].catalog.schemata[1].tables[0]

>>> dir(table)
['__callmethod__', '__class__', '__delattr__', '__dict__', '__doc__', '__getattribute__', '__grtclassname__', '__hash__', '__init__', '__module__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__str__', '__weakref__', 'addColumn', 'addIndex', 'addPrimaryKeyColumn', 'avgRowLength', 'checksum', 'columns', 'comment', 'commentedOut', 'connection', 'connectionString', 'createDate', 'createForeignKey', 'customData', 'defaultCharacterSetName', 'defaultCollationName', 'delayKeyWrite', 'foreignKeys', 'indices', 'inserts', 'isDependantTable', 'isForeignKeyColumn', 'isPrimaryKeyColumn', 'isStub', 'isSystem', 'isTemporary', 'lastChangeDate', 'maxRows', 'mergeInsert', 'mergeUnion', 'minRows', 'modelOnly', 'name', 'nextAutoInc', 'oldName', 'owner', 'packKeys', 'partitionCount', 'partitionDefinitions', 'partitionExpression', 'partitionType', 'password', 'primaryKey', 'raidChunkSize', 'raidChunks', 'raidType', 'removeColumn', 'removeForeignKey', 'removeIndex', 'removePrimaryKeyColumn', 'rowFormat', 'subpartitionCount', 'subpartitionExpression', 'subpartitionType', 'tableDataDir', 'tableEngine', 'tableIndexDir', 'temp_sql', 'temporaryScope', 'triggers']

>>> table.name
'actor'

>>> table.name = table.name.upper()

>>> table.name
'ACTOR'

>>> table.columns[1].formattedType(1)
'VARCHAR(45)'

>>> isinstance(table, grt.classes.db_mysql_Table)
True

>>> dir(grt.modules)
['DbMySQL', 'DbUtils', 'MySQLDbDiffReportingModule', 'MySQLDbModule', 'MySQLEditorsModule', 'MySQLSchemaReportingModule', 'MysqlSqlFacade', 'PluginManager', 'PropelExport', 'PyWbUtils', 'WbEditorsModule', 'WbModel', 'WbModuleValidation', 'WbModuleValidationMySQL', 'WbMysqlImport', 'Workbench', '__doc__', '__name__']

>>> from grt.modules import Workbench
>>> Workbench.copyToClipboard("HELLO WORLD")
1L

Modules and Plugins

Writing Workbench modules and plugins is a very simple task in Python. For modules (collections of routines available to other scripts and modules, from any language), all you need to do is import the wb module and call DefineModule() with a few arguments. Marking one of your functions for export can be done with the @export decorator from the object returned by DefineModule().

To make your module also export plugins (functions that are exposed to the user in menus or other ways), the @plugin decorator can be used in addition to @export.

Here’s a sample Python module, exporting a function which also works as a plugin, callable from Plugins -> Catalog -> Dump All Table Columns. It will simply dump a list of all columns from all tables in all schemata in your model to the Output pane.

# import the wb module, must be imported this way for the automatic module setup to work
from wb import *
# import the grt module
import grt

# define this Python module as a GRT module
ModuleInfo = DefineModule(name= "PyWbUtils", author= "Sun Microsystems Inc.", version= "1.0")

# this is just a function used by the plugin, it's not exported
def printTableLine(fields, filler= " "):
  print "|",
  for text, size in fields:
    print text.ljust(size, filler), "|",
  print

# @export makes this function be exported by the module and also describes the return and
# argument types of the function
# @plugin defines the name of the plugin as "wb.catalog.util.dumpColumns", sets the caption to be
# shown in places like the menu, where to take input arguments from and also that it should be included
# in the Catalog submenu in Plugins.
@ModuleInfo.plugin("wb.catalog.util.dumpColumns", caption= "Dump All Table Columns", input= [wbinputs.currentCatalog()], pluginMenu= "Catalog")
@ModuleInfo.export(grt.INT, grt.classes.db_Catalog)
def printAllColumns(catalog):
  lines= []
  schemalen= 0
  tablelen= 0
  columnlen= 0
  typelen= 0
  for schema in catalog.schemata:
    schemalen= max(schemalen, len(schema.name))
    for table in schema.tables:
      tablelen= max(tablelen, len(table.name))
      for column in table.columns:
        columnlen= max(columnlen, len(column.name))
        typelen= max(typelen, len(column.formattedType(1)))
        lines.append((schema.name, table.name, column.name, column.formattedType(1)))

  printTableLine([("-", schemalen),
                  ("-", tablelen),
                  ("-", columnlen),
                  ("-", typelen)], "-")

  printTableLine([("Schema", schemalen),
                  ("Table", tablelen),
                  ("Column", columnlen),
                  ("Type", typelen)])

  printTableLine([("-", schemalen),
                  ("-", tablelen),
                  ("-", columnlen),
                  ("-", typelen)], "-")

  for s, t, c, dt in lines:
    printTableLine([(s, schemalen), (t, tablelen), (c, columnlen), (dt, typelen)])

  printTableLine([("-", schemalen),
                  ("-", tablelen),
                  ("-", columnlen),
                  ("-", typelen)], "-")

  print len(lines), "columns printed"
  return 0

Truncated output for the sakila model:

	| ------ | ------- | ----------- | ------------- |
	| Schema | Table   | Column      | Type          |
	| ------ | ------- | ----------- | ------------- |
	| sakila | actor   | actor_id    | SMALLINT      |
	| sakila | actor   | first_name  | VARCHAR(45)   |
	| sakila | actor   | last_name   | VARCHAR(45)   |
	| sakila | actor   | last_update | TIMESTAMP     |
	| sakila | address | address_id  | SMALLINT      |
	| sakila | address | address     | VARCHAR(50)   |
	| sakila | address | address2    | VARCHAR(50)   |
	| sakila | address | district    | VARCHAR(20)   |
	| sakila | address | city_id     | SMALLINT      |
	| sakila | address | postal_code | VARCHAR(10)   |
	....
	| ------ | ------- | ----------- | ------------- |
	89 columns printed

With the large number of libraries available for Python, the possibilities for writing Workbench plugins and modules are now much greater.

Also, to facilitate executing your own scripts, a new command to run script files has been added to the Tools menu.

Automate Figure Arrangement In Workbench

Initiated by a question on the forum, I took some time to exercise my Workbench scripting knowledge. The question was whether it's possible to do some figure arrangement/organizing via scripts. To be more precise, the task I had to master was to move tables whose names start with a given prefix onto a new layer using scripts.

The function to create a new object (a layer in this case) was an easy one. All you have to do is call grtV.newObj("workbench.model.Layer"). This function returns a handle to the created layer. Then set the visual properties (color, position) and, most importantly, set the owner to the containing Diagram object. The last step is to add the layer to the "layers" property of the Diagram object.
The second step, which I put into a separate function, is to ask the user for the prefix to filter the desired tables/objects that should be grouped onto the new layer. Next, loop through the figures on the rootLayer, compare each name with the given prefix and, if it matches, add the object to the "figures" property of the destination layer, remove it from rootLayer.figures, and adjust the figure coordinates to line them up in a cascading arrangement.
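The cascading-coordinate step at the end can be sketched independently of the Workbench API: each matching figure gets a position shifted diagonally from the previous one. The starting point and offset below are arbitrary placeholders; in the script itself the computed values would be assigned to each figure's position properties.

```python
def cascade_positions(names, prefix, left=10, top=10, step=20):
    """Return {name: (x, y)} cascading positions for every name
    that starts with the given prefix."""
    positions = {}
    n = 0
    for name in names:
        if name.startswith(prefix):
            # each matching figure is offset diagonally from the last one
            positions[name] = (left + n * step, top + n * step)
            n += 1
    return positions

print(cascade_positions(["fk_a", "tmp_x", "fk_b"], "fk_"))
```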

Fetch the attached .zip file and follow the instructions in the README to add the script to your copy of Workbench. This is only a basic implementation, so feel free to expand/improve it to perform more sophisticated operations on your models.

Download the scriptfile: organizefigures.zip

Pimp My Workbench

Some weeks ago there was a request about adding a feature that keeps (or gets) the datatypes of columns connected via foreign keys in sync. While we have added this to our ToDo list, I thought of doing some simple Lua exercises to give the user at least a function to check whether the FKs in the model all have columns with matching datatypes.
As it turned out, such tasks can be done quite easily using our GRT shell. And by writing some functions you can add features to Workbench's interface.
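Stripped of the GRT plumbing, the check itself reduces to comparing each foreign-key column's type with the type of the column it references. A minimal, Workbench-independent sketch of that comparison (the real Lua module walks the model's foreign key objects rather than plain tuples, and the key names here are invented):

```python
def mismatched_fk_columns(fks):
    """Given foreign keys as (fk_name, column_type, referenced_column_type)
    tuples, return the names of keys whose types do not match."""
    return [name for name, coltype, reftype in fks if coltype != reftype]

fks = [("fk_order_customer", "INT", "INT"),
       ("fk_item_order", "SMALLINT", "INT")]
print(mismatched_fk_columns(fks))  # -> ['fk_item_order']
```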

To get in touch with the shell/GRT environment inside Workbench, click the tiny black prompt-window icon in the top right area of Workbench (or choose ‘View -> Advanced -> GRT Shell’ from the main menu). Some items in the UI are swapped/added: the sidebar on the right side gets a new tab named ‘GRT Shell’ featuring some more tool panes (GRT Tree, Modules, Inspector), and on the bottom part of the main window, where the editors usually show up, a black shell console will pop up.
At the top of the sidebar you can see the module file browser. By default the only two files in there are the ones containing our standard modules. When you want to dive further into scripting Workbench, it's best to look at the functions in the standard module files. When you're ready to add some custom functions to Workbench, simply create a new user module (click the “new” icon at the top of the sidebar), select the “LUA GRT Module File” template, give the file a name and click “Add”. You'll find the functions needed for registering your module in Workbench already set up there for you to adapt. The comments in the code will point out the parts you need to modify. So let's start extending!

As an additional example, I'm adding my approach to the check-FK problem mentioned at the beginning of this post. Grab the zip, unpack it and put the .lua files into your Workbench modules directory (detailed instructions can be found in the readme inside the .zip).
UserModules for checking FK-columns consistency