Starting with version 8.0.18, the MySQL Workbench source package finally ships all our GPL unit and integration tests, which we use internally to control code quality. For that we first had to replace our old, outdated testing framework with something new and more appropriate. We evaluated quite a few C++ testing frameworks but found them either insufficient or difficult to use. Instead, we had something in mind that comes close to the Jasmine framework, which is widely used among JS developers. The way it hides all the boring test management details, and the clear structure it enforces, was quite an inspiration for us, and we decided to develop our own testing framework modeled after it.
Continue reading “MySQL Workbench now using Casmine for unit and integration testing”
Now that we have the new MySQL community workspace on Slack, we have opened a channel for our users there. If you have questions or problems with Workbench, this is the place to be.
This channel will soon replace our old #workbench IRC channel.
While reworking our initial code completion implementation in MySQL Workbench, I developed an approach that can potentially be applied to many different situations/languages where you need code completion. The current implementation is tailored to the needs of MySQL Workbench, but with some small refactorings you can factor out the MySQL-specific parts and end up with a clean core implementation that you can easily customize to your needs.
Since this implementation is not bound to MySQL Workbench alone, I posted the full description on my private blog.
With the new GA release 6.3.5 we finally also managed to bring MySQL Workbench to GitHub. This is something we had planned for quite some time, and now the repository is part of the MySQL group on GitHub. Go Workbench…
Some years ago I posted an article about the code size in the MySQL Workbench project and talked a bit about the different subprojects and modules. At that time the project consisted of ~400K LOC (including third-party code), and even then the parser was the second biggest part, with nearly a fourth of the size of the entire project. That parser project used the yacc grammar from the MySQL server codebase and was our base for all parsing tasks in the product. Well, things have changed a lot since those days, and this blog post discusses the current parsing infrastructure in MySQL Workbench.
Continue reading “Parsing in MySQL Workbench: the ANTLR age”
Edit: added sample table output in MySQL
MySQL Workbench 6.2 introduces support for MS Access migration. This tutorial should help you get your Access tables, indexes, relationships and data into MySQL.
Because MS Access ODBC drivers are only available for Windows, migrating from it is also only possible from Windows. As for the destination MySQL server, it can be on the same local machine or elsewhere in your network.
MS Access stores relationship/foreign key information in an internal table called MSysRelationships. That table is protected against read access even to the Admin user, so if you try to migrate without opening up access to it, you will get an error like this:
 [Microsoft][ODBC Microsoft Access Driver] Record(s) cannot be read; no read permission on 'msysobjects'. (-1907) (SQLExecDirectW)
The steps to grant read access to Admin are explained below. Unfortunately, the Access UI for that seems to change every version, but we’ll cover at least version 2007.
Preparing a Database Under MS Access 2007
- Open the database in Access
- Under the “Database Tools”, click the “Macro -> Visual Basic” button to open the VB console
- To confirm that you’re logged in as “Admin”, type “? CurrentUser” in the “Immediate” panel and press Enter:
- Type the following command to grant access:
CurrentProject.Connection.Execute "GRANT SELECT ON MSysRelationships TO Admin"
Open MySQL Workbench and start the Migration Wizard
From the main MySQL Workbench screen you can start the Migration Wizard by clicking on the Database Migration launcher in the Workbench Central panel or through Database -> Migrate in the main menu.
A new tab showing the Overview page of the Migration Wizard should appear.
To check if you have the ODBC driver installed, click “Open ODBC Administrator” to open the system ODBC tool and look at the Drivers tab.
Important: MySQL Workbench has 32-bit and 64-bit executables. The ODBC drivers you use must be of the same architecture as the Workbench binaries you are using. So if you’re using the 32-bit Workbench, you must have 32-bit ODBC drivers; the same goes for 64-bit. Because Office 2007 and older were 32-bit only, and even Office 2010 installs as 32-bit by default, you may need to install the 32-bit Workbench to migrate from Access, even if you have a 64-bit machine. If during migration you get an ODBC error about an “architecture mismatch between the Driver and Application”, you installed the wrong Workbench.
In the User DSN tab, click on Add… to create a DSN for your database file. For this example, we created one for the northwind sample database.
Set up the parameters for your source Access database
Click on the Start Migration button in the Overview page to advance to the Source Selection page. In this page you need to provide the information about the RDBMS you are migrating, the ODBC driver to use and the parameters for the connection.
If you open the Database System combo box you’ll find a list of the supported RDBMSes. Select Microsoft Access from the list. Just below it there’s another combo box named Stored Connection. It will list saved connection settings for that RDBMS. You can save connections by marking the checkbox at the bottom of the page and giving them a name of your preference.
The next combo box is for the selection of the Connection Method. This time we are going to select ODBC Data Source from the list. This allows you to select pre-existing DSNs that you have configured in your system.
The DSN drop-down lists all DSNs you have defined in your system. Pick from the list the one you created for the database being migrated.
In the Default Character Set field you can select the character set of your database. If your Access version uses western/latin characters, you can leave the default cp1252. However, if you use a localized version of Access, such as Japanese, you must enter the correct character set used by your edition of Office, otherwise the data will be copied incorrectly.
Click on the Test Connection button to check whether an ODBC connection can be established. If you entered the right parameters, you should see a message reporting a successful connection attempt.
Set up the parameters to connect to your target MySQL database
Click on the Next button to move to the Target Selection page. Once there set the parameters to connect to your MySQL Server instance. When you are done click on the Test Connection button and verify that you can successfully connect to it.
Select the objects to migrate
Move to the next page using the Next button. You should see the reverse engineering of the selected database in progress. At this point the Migration Wizard is retrieving relevant information about the involved database objects (table names, table columns, primary and foreign keys, indices, triggers, views, etc.). You will be presented with a page showing the progress, as shown in the image below.
Wait for it to finish and verify that everything went well. Then move to the next page. In the Source Objects page you will have a list with the objects that were retrieved and are available for migration. It will look like this:
As you can see, the Migration Wizard discovered table and view objects in our source database. Note that only the table objects are selected by default to be migrated. You can select the view objects too, but you would have to provide their corresponding MySQL equivalent code later (no automatic migration is available for them yet), so let’s leave them off for now. The same applies to stored procedures, functions and triggers.
If you click on the Show Selection button you will be given the opportunity to select exactly which of them you want to migrate, as shown here:
The items in the list to the right are the ones to be migrated. Note how you can use the filter box to easily filter the list (wildcards are allowed as you can see in the image above). By using the arrow buttons you can filter out the objects that you don’t want to migrate. At the end, don’t forget to clear the filter text box to check the full list of the selected objects. We are going to migrate all of the table objects, so make sure that all of them are in the Objects to Migrate list and that the Migrate Table Objects checkbox is checked. Most of the time you’ll want to migrate all objects in the schema anyway, so you can just click Next.
Review the proposed migration
Move to the next page. You will see the progress of the migration there. At this point the Migration Wizard is converting the objects you selected into their equivalent objects in MySQL and creating the MySQL code needed to create them in the target server. Let it finish and move to the next page. You might have to wait a little bit before the Manual Editing page is ready but you’ll end up with something like this:
As you can see in the image above, there is a combo box named View. By using it you can change the way the migrated database objects are shown. Also take a look at the Show Code and Messages button. If you click on it you can see (and edit!) the generated MySQL code that corresponds to the selected object. Furthermore, you can double-click a row in the object tree and edit the name of the target object. Suppose you want your resulting database to have another name. No problem: double-click on the Northwind row and rename it.
An interesting option in the View combo box is the Column Mappings one. It will show you all of the table columns and will let you individually review and fix the mapping of column types, default values and other attributes.
Run the resulting MySQL code to create the database objects
Move to the Target Creation Options page. It will look like this:
As you can see there, you are given the options of running the generated code in the target RDBMS (your MySQL instance from the second step) or just dumping it into a SQL script file. Leave it as shown in the image and move to the next page. The migrated SQL code will be executed in the target MySQL server. You can view its progress in the Create Schemata page:
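If you choose the script file option, the wizard writes plain MySQL DDL. As a rough illustration (this is a hand-written sketch, not the wizard’s exact output; the column definitions are assumptions based on the Northwind sample), the script contains statements along these lines:

    -- Illustrative excerpt of a generated migration script (not actual wizard output)
    CREATE SCHEMA IF NOT EXISTS Northwind DEFAULT CHARACTER SET utf8;
    CREATE TABLE IF NOT EXISTS Northwind.customers (
      CustomerID  VARCHAR(5)  NOT NULL,   -- primary key carried over from Access
      CompanyName VARCHAR(40) NOT NULL,
      City        VARCHAR(15) NULL,
      PRIMARY KEY (CustomerID)
    ) ENGINE = InnoDB;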
Once the creation of the schemata and their objects finishes, you can move to the Create Target Results page. It will present you with a list of the created objects and whether there were errors while creating them. Review it and make sure that everything went OK. It should look like this:
You can still edit the migration code using the code box to the right and save your changes by clicking on the Apply button. Keep in mind that you would still need to recreate the objects with the modified code in order to actually perform the changes. This is done by clicking on the Recreate Objects button. You may need to edit the generated code if its execution failed. You can then manually fix the SQL code and re-execute everything. In this tutorial we are not changing anything, so leave the code as it is and move to the Data Transfer Setup page by clicking on the Next button.
Transfer the data to the MySQL database
The next steps in the Migration Wizard are for transferring the data from the source Access database into your newly created MySQL database. The Data Transfer Setup page allows you to configure this process.
There are two sets of options here. The first one allows you to perform a live transfer and/or to dump the data into a batch file that you can run later. The other set of options gives you a way to tune this process.
Leave the default values for the options on this page as shown in the above image and move on to the actual data transfer by jumping to the next page. It will take a little while to copy the data. At this point the corresponding progress page will look familiar:
Once it finishes, move to the next page. You will be presented with a report page summarizing the whole process:
And that should be it. Click on the Finish button to close the Migration Wizard.
A little verification step
Now that the Northwind database has been successfully migrated, let’s see the results. Open an SQL Editor page associated with your MySQL Server instance and query the Northwind database. You can try something like “SELECT * FROM Northwind.customers”. You should get something like this:
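If you want to go a bit further, here are a few quick sanity checks you can run in the SQL Editor (table names assume the Northwind sample used throughout this tutorial):

    -- List the migrated tables and peek at the data
    USE Northwind;
    SHOW TABLES;
    SELECT * FROM customers LIMIT 10;
    -- Row counts are an easy way to compare against the source database
    SELECT COUNT(*) FROM customers;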
The recently released MySQL Workbench 6.1 continues expanding the capabilities of the tool. This version focuses on performance analysis features for MySQL server workloads and queries, taking advantage of the performance_schema improvements introduced in MySQL 5.6 and the 5.7 DMR.
This is an overview of the new features we’ve added. Some of them are covered in more detail in separate posts.
New Navigator PERFORMANCE Section
The new PERFORMANCE section includes Dashboard, Performance Reports, and Performance Schema Setup pages. Generally, this new performance reporting feature provides a graphical representation of key statistics from the MySQL server status and an overview of the MySQL server subsystems.
Dashboard
View server performance statistics in a graphical dashboard.
Performance Reports
Performance Schema based reports that provide insight into the operation of the MySQL server through many high-level views.
Performance Schema Setup
A GUI for configuring and fine tuning the Performance Schema instrumentation. Initially, this loads an “Easy Setup” page that is enough for most users. Slide the “Performance Schema Full Enabled” slider to YES to enable all available Performance Schema instruments.
Clicking Show Advanced provides methods to fine tune the Performance Schema instrumentation.
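Under the hood, this configuration lives in the Performance Schema setup tables, which you can also inspect yourself from any SQL editor (read-only queries, safe to run):

    -- Which instruments are currently enabled and timed?
    SELECT NAME, ENABLED, TIMED
      FROM performance_schema.setup_instruments
     LIMIT 10;
    -- Which consumers are collecting data?
    SELECT NAME, ENABLED
      FROM performance_schema.setup_consumers;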
Variables can now be organized using custom groupings in the Status and System Variables Management tab.
To create a custom group, right-click on a variable and choose either Add to Custom Category (to create a new category), or an existing custom category.
Additional viewing options were added for executed statements:
- Result Grid: available previously, and it remains the default view.
- Form Editor: you can now edit records row by row in a form-style editor.
- Field Types: displays information about the selected fields, similar to passing --column-type-info to the command-line client.
- Query Stats: query statistics taken from the Performance Schema, including information about timing, temporary tables, indexes, joins, and more.
Several behavioral improvements were made to the MySQL Workbench Home screen, including:
- Connection tiles can now be repositioned by using drag and drop
- A script or model file can be dragged into a MySQL connection tile
- The following right-click options were added to the connection tiles: Copy JDBC Connection String and Copy Connection String
- Right-clicking a blank area in the MySQL Connections area now offers an option to create a New Connection From Clipboard
Visual Explain
The layout changed, and additional information is now viewable by hovering over the fields. It also displays traditional EXPLAIN output in a separate tab, and the Raw Explain Data (as JSON) in another. For MySQL server 5.7+, the new “cost information” (such as “query_cost” and “sort_cost”) is also utilized.
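The Raw Explain Data tab is built on the JSON explain output you can also request yourself; on MySQL 5.7+ it includes the cost fields mentioned above (the query is just an illustrative example against the Northwind sample):

    -- Raw explain data as JSON; 5.7+ adds cost_info such as query_cost
    EXPLAIN FORMAT=JSON
    SELECT * FROM Northwind.customers WHERE City = 'London';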
Table Inspector
View table information, similar to the Schema Inspector. This also has a simpler and easier-to-use interface for analyzing and creating indexes for your tables.
Additional information was added to the Client Connections tab, such as Thread ID, Parent Thread, Instrumented, and Type.
Also, a Thread Stack view option was added by right-clicking a connection entry in the Client Connections tab and choosing View Thread Stack.
Additional improvements include:
- MSAA (Windows Accessibility API) support and a high-contrast color theme on Microsoft Windows
- MySQL Enterprise Backup improvements
- General performance and overall stability improvements
The MySQL Workbench team just uploaded a new video to the MySQL channel on YouTube. This video is meant for beginners and describes the process of creating and troubleshooting connections in MySQL Workbench.
The Performance Schema Reports feature in MySQL Workbench shows summaries of the many statistics gathered by the MySQL performance_schema. By inspecting these reports, you can get some insight into what’s happening in a MySQL server, in aspects such as:
- I/O by amount of data
- I/O by latency/time spent
- Index usage
- Performance-critical operations performed by queries of the same type (table scans, index scans, temporary tables, sorts, etc.)
The MySQL SYS Schema
MySQL 5.5 introduced the performance_schema feature. performance_schema contains tables that log server performance and activity statistics. You can inspect it to get a clearer understanding of what kind of work the server is doing, how much time is spent doing it, resources used globally or by individual queries, etc. MySQL 5.6 expanded on it and added even more information, making it a lot more powerful.
However, the performance_schema tables mostly contain raw data and require some processing and SQL magic to make them palatable so you can get the most out of them. And that is what the new SYS schema project provides. The SYS schema has many views that turn raw performance_schema data into high-level reports, along with stored procedures that make it easier for advanced users to work with the performance_schema.
The SYS schema is bundled in MySQL Workbench 6.1 and is – after confirmation – automatically installed in supported MySQL servers. Currently, only MySQL 5.6 and 5.7 are supported, because the performance_schema in MySQL 5.5 still lacked maturity and did not yet provide enough information to be very useful.
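To give an idea of the processing these views do, here is the same question (which normalized statements consume the most time?) asked first against the raw performance_schema table and then against the corresponding SYS view (view and column names as in the sys schema project; treat this as a sketch):

    -- Raw performance_schema: statement digests, timer values in picoseconds
    SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
      FROM performance_schema.events_statements_summary_by_digest
     ORDER BY SUM_TIMER_WAIT DESC
     LIMIT 5;
    -- The SYS view serves the same data pre-aggregated and human-readable
    SELECT query, exec_count, total_latency
      FROM sys.statement_analysis
     LIMIT 5;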
To view the reports, the SYS schema must be installed and the appropriate P_S instrumentation has to be enabled. If not, Workbench will prompt you to click a button to enable it.
The required instrumentation is the following (if you choose to change it manually, a SQL sketch follows the list):
- current and history_long consumers must be enabled for all events
- all wait instruments must be enabled and timed
- all stage instruments must be enabled and timed
- all statement instruments must be enabled and timed
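Enabling this by hand essentially comes down to updates on the Performance Schema setup tables, roughly like the following (a sketch; run it as a privileged user, and note that these settings do not survive a server restart):

    -- Enable the current and history_long consumers for all events
    UPDATE performance_schema.setup_consumers
       SET ENABLED = 'YES'
     WHERE NAME LIKE '%current' OR NAME LIKE '%history_long';
    -- Enable and time all wait, stage and statement instruments
    UPDATE performance_schema.setup_instruments
       SET ENABLED = 'YES', TIMED = 'YES'
     WHERE NAME LIKE 'wait/%'
        OR NAME LIKE 'stage/%'
        OR NAME LIKE 'statement/%';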
The currently available reports are listed below. As more views are added to the SYS schema, Workbench will also be updated to list them.
- Top File I/O Activity Report
Shows the files doing the most I/O in bytes
- Top I/O by File by Time
Shows the highest I/O usage by file and latency
- Top I/O by Event Category
Shows the highest I/O data usage by event category
- Top I/O in Time by Event Categories
Shows the highest I/O time consumers by event category
- Top I/O Time by User/Thread
Shows the top I/O time consumers by user/thread
- Statement Analysis
Lists statements with various aggregated statistics
- Statements in Highest 5 Percent by Runtime
Lists all statements whose average runtime, in microseconds, is in the highest 5 percent
- Using Temp Tables
Lists all statements that use temporary tables – accessing the highest number of disk temporary tables first, then memory temporary tables
- With Sorting
Lists all normalized statements that have done sorts, ordered by the following priority – sort_merge_passes, sort_scans and sort_rows
- Full Table Scans
Lists statements that have performed a full table scan. Assess query performance and the WHERE clause(s); if no index is used, consider adding indexes for large tables
- Errors or Warnings
Lists statements that have raised errors or warnings.
- Schema Object Overview (High Overhead)
Shows count by object type for each schema.
Note: On instances with a large number of objects, this can take some time to execute.
- Schema Index Statistics
- Schema Table Statistics
- Schema Table Statistics (with InnoDB buffer)
- Tables with Full Table Scans
Finds tables that are being accessed by full table scans, ordered by the number of rows scanned, descending
- Unused Indexes
List of indexes that were never used since the server started or since P_S data collection started.
- Waits by Time
Lists the top wait events by their total time, ignoring idle (this may be very large)
- Waits by User by Time
Lists the top wait events by user by their total time, ignoring idle (this may be very large)
- Wait Classes by Time
Lists the top wait classes by total time, ignoring idle (this may be very large)
- Wait Classes by Average Time
Lists the top wait classes by average time, ignoring idle (this may be very large)
- InnoDB Buffer Stats by Schema
Summarizes the output of the INFORMATION_SCHEMA.INNODB_BUFFER_PAGE table, aggregating by schema
- InnoDB Buffer Stats by Table
Summarizes the output of the INFORMATION_SCHEMA.INNODB_BUFFER_PAGE table, aggregating by schema and table name
The SQL Editor in MySQL Workbench 6.1 adds a new interface for query results. This addition offers four new views of your query results: two for the result data itself and two for meta-information about the executed query.
Query Result Set Grid
The traditional result set grid. Run a SELECT query on a table with a primary key and you can edit the data. You must click the Edit button to enter edit mode.
Note: Until Workbench 6.1.1, the check was being done automatically for every SELECT query, but since that requires extra queries to MySQL, the check is now done on demand.
Result Set Form Editor
The new form editor for result sets comes in handy when you want to closely inspect the fields of each record (especially if it contains multi-line text). You can also edit the individual records, if your result set is editable.
Result Set Field Types
Here you can inspect information about the fields that were returned by the MySQL server in your query results. Similar to the --column-type-info option of the command-line client, it will show you the schema and table the field comes from, along with type information.
Performance Schema Statistics
This tab uses data gleaned from the performance_schema (in MySQL 5.6) to show key statistics about the execution of your query, as collected by the server. For this tab to appear, you need to have the performance_schema enabled with statement instrumentation.
You can read about the meaning of each item in the MySQL performance_schema documentation, but here’s a summary of some key items (a direct query against the underlying tables follows the list):
- Timing: the timing information shown in the Action Output area in Workbench is the query execution time as measured at the client side, so it will include network travel time. But here you also have the timing as instrumented by the server itself. This includes the amount of time waiting for table locks, as a separate value.
- Rows Processed: the number of rows that were evaluated and then generated to be sent back to the client
- Temporary Tables: the number of temporary tables that had to be created for the query to be executed
- Joins per Type: the number of JOINs that were executed for the query, broken down by type. This is similar to the info you’d get from EXPLAIN.
- Sorting: the number of rows that had to be sorted by the server.
- Index Usage: you can see here whether table scans had to be performed without using an index.
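These numbers come from the Performance Schema statement event tables, which you can also query directly; here is a sketch showing some of the columns behind the items above:

    -- Recent statement events (one row per statement, across all sessions)
    SELECT EVENT_ID, SQL_TEXT, TIMER_WAIT, LOCK_TIME,
           ROWS_SENT, ROWS_EXAMINED,
           CREATED_TMP_TABLES, CREATED_TMP_DISK_TABLES,
           SELECT_FULL_JOIN, SORT_ROWS, NO_INDEX_USED
      FROM performance_schema.events_statements_history
     ORDER BY TIMER_START DESC
     LIMIT 5;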
You can disable fetching of this information from the Query -> Collect Performance Schema Stats menu item. You may want to do that if you don’t need the stats, since an extra query has to be executed for every query you run.