Introduction to SQL Server 2014 CTP1 Memory-Optimized Tables

SQL Server 2014 introduces a number of new features. One of the more exciting ones is Memory-Optimized tables. In this article I will explore how to create Memory-Optimized tables, and share what I found during my initial exploration of this new type of table.

What is a Memory-Optimized Table?

There are two kinds of Memory-Optimized tables: SCHEMA_AND_DATA and SCHEMA_ONLY. A SCHEMA_AND_DATA Memory-Optimized table resides in memory, but its data is still available after a crash, a shutdown or a restart of SQL Server. A SCHEMA_ONLY Memory-Optimized table, on the other hand, does not persist its data should SQL Server crash or the instance be stopped or restarted; only the table structure is retained.

Each of these Memory-Optimized table types has its place. A SCHEMA_ONLY table, for instance, would be useful as a staging table in a data warehouse application. Staging tables are typically fairly easy to reload from their data source, which is why making them SCHEMA_ONLY is relatively safe. Point-of-sale transaction data, on the other hand, might be a good fit for a SCHEMA_AND_DATA table: you want those transactions to run as fast as possible, which a Memory-Optimized table provides, but you wouldn't want to lose those transactions should your server be restarted.

Creating a Database to Support Memory-Optimized Tables

In order to create a Memory-Optimized table you first need a database that will support Memory-Optimized tables. To do that you can either create a brand new database that contains a MEMORY_OPTIMIZED_DATA filegroup, which is needed to support Memory-Optimized tables, or you can ALTER an existing database to add this new filegroup.
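If you would rather add Memory-Optimized support to an existing database, the ALTER DATABASE route looks roughly like the sketch below. The database name, filegroup name and file path are just placeholders for illustration; the important parts are the CONTAINS MEMORY_OPTIMIZED_DATA clause and the container (really a folder on disk) added to that filegroup.

-- Add a memory-optimized data filegroup to an existing database
-- (ExistingDb, the filegroup name, and the path are hypothetical placeholders)
ALTER DATABASE ExistingDb
  ADD FILEGROUP ExistingDb_InMemory CONTAINS MEMORY_OPTIMIZED_DATA;
GO
 
-- Add a container to the new filegroup
ALTER DATABASE ExistingDb
  ADD FILE (NAME = ExistingDb_InMemory,
            FILENAME = N'C:\Data\ExistingDb_InMemory')
  TO FILEGROUP ExistingDb_InMemory;
GO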

In all of my examples below I will be using a new database named InMemory, which will contain all of my different Memory-Optimized tables. I will show you how to create this database with a script and with SQL Server Management Studio. First, here is the script to create this database:

-- =============================================
-- Create InMemory Database
-- =============================================
IF EXISTS (
  SELECT * 
    FROM sys.databases 
   WHERE name = N'InMemory'
)
  DROP DATABASE InMemory
GO
 
CREATE DATABASE InMemory
ON PRIMARY
  (NAME = InMemory_Data,
    FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\InMemory_Data.mdf',
          SIZE = 100MB,
          FILEGROWTH = 10MB),
 
FILEGROUP InMemory_InMemory CONTAINS MEMORY_OPTIMIZED_DATA
  ( NAME = InMemory_InMemory,
    FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\InMemory_InMemory.mdf')
 
LOG ON
  ( NAME = InMemory_Log,
    FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\InMemory_Log.ldf',
          SIZE = 100MB,
          FILEGROWTH = 10MB)

GO

If you review this script you can see that I created a FILEGROUP named “InMemory_InMemory” that will be used to support my Memory-Optimized tables.  This filegroup contains a single file.  Without this “MEMORY_OPTIMIZED_DATA” filegroup I wouldn’t be able to create a Memory-Optimized table in my new database.
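If you want to confirm that the memory-optimized filegroup is in place, you can check the catalog views. The query below is a quick sketch; in SQL Server 2014 a memory-optimized data filegroup should show up in sys.filegroups with a type of 'FX':

USE InMemory;
GO
 
-- A memory-optimized data filegroup is reported with type 'FX'
SELECT name, type, type_desc
  FROM sys.filegroups;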

In order to also show you how to create the same database using SQL Server Management Studio I must first run the drop portion of the script above.

To create a database using SQL Server Management Studio I first right-click on the "Databases" node, then select the "New Database" option from the menu. Doing this brings up the "New Database" window. In that window I enter the name of my InMemory database, so the window looks like this:

New Database

I then click on the "Filegroups" option in the left pane of this window, which brings up the filegroups page of the New Database dialog. On this page I click on the "Add Filegroup" button under the "MEMORY OPTIMIZED DATA" section of the right pane, then I enter the name of my new memory-optimized filegroup as shown in the screenshot below:

Enter the name of the new memory-optimized filegroup

I then click on the "General" option in the left pane to add a new file in my new InMemory_InMemory filegroup. I do that by clicking on the "Add" button, then selecting a "File Type" of "FILESTREAM", and then specifying the "Logical Name" of "InMemory_InMemory" as shown in the screenshot below:

Specify the Logical Name

All that is left is to just click on the “OK” button to create my new Memory-Optimized database named InMemory.

Creating Memory-Optimized Tables

Once I have created my new Memory-Optimized database "InMemory" using one of the two methods shown above, I can move on and create some Memory-Optimized tables. For this demo I am going to create two Memory-Optimized tables: one will be a SCHEMA_AND_DATA table and the other a SCHEMA_ONLY table. I will create one table with a script, and the other with SQL Server Management Studio. I will also be creating a normal table so I can show the performance differences you see when inserting data into these different types of tables.

The first table I am going to create is my "Normal" table. It is created just as you would normally create a table. Here is the script to create this table:

--Drop My Normal Table if it exists 
IF OBJECT_ID('Normal', 'U') IS NOT NULL
    DROP TABLE Normal
GO
 
--Create My Normal table that is not Memory Optimized
CREATE TABLE Normal
(
       Id int NOT NULL, 
       C1 char(1000) NOT NULL,
    CONSTRAINT PK_Normal PRIMARY KEY NONCLUSTERED (Id) 
)

There is nothing special here.  As you can see I created a table with two columns.  Note all three of the tables that I will be creating will have the same two columns. 

Next I will create my SCHEMA_AND_DATA table, which will use the DURABILITY = SCHEMA_AND_DATA option. This option tells SQL Server that I want my Memory-Optimized table to retain its data should SQL Server stop for some reason. I will be creating this table using the following Transact-SQL script:

--Drop my Schema_And_Data Memory-Optimized table if it exists
IF  OBJECT_ID('MemoryOptimized_Schema_And_Data','U') IS NOT NULL
    DROP TABLE MemoryOptimized_Schema_And_Data
GO
 
CREATE TABLE MemoryOptimized_Schema_And_Data
(
       Id int NOT NULL, 
       C1 char(1000) NOT NULL,
       CONSTRAINT PK_MemoryOptimized_Schema_And_Data PRIMARY KEY NONCLUSTERED HASH (Id) WITH (BUCKET_COUNT = 1024) 
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)

If you review this CREATE TABLE script you will see some new options used to create this Memory-Optimized table. The first thing you will notice is that I am creating a non-clustered HASH index on the column "Id" with a BUCKET_COUNT of 1024. What is this all about? Memory-Optimized tables require that you have at least one HASH index defined, and cannot have more than 8 indexes in total. In CTP1, character columns can only be used in a HASH index if they use a Windows BIN2 collation; therefore on my table I could only create a HASH index on the int column and not on the char column.

The BUCKET_COUNT identifies the number of buckets SQL Server will create in memory to store the records of my Memory-Optimized table. Each bucket is identified by the value produced by hashing the index column, so index key values that hash to the same value are stored in the same bucket. It is therefore recommended that you specify a BUCKET_COUNT equal to or greater than the number of unique key values you expect for your Memory-Optimized table. For more on the internals of how Memory-Optimized tables are stored, read Kalen Delaney's white paper identified in the "Additional Resources" section below.
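Once a Memory-Optimized table has been loaded with data, you can get a feel for whether your BUCKET_COUNT was a reasonable choice by looking at the hash index statistics DMV that SQL Server 2014 provides. The query below is a small sketch of that kind of check; a large number of empty buckets or a long average chain length would suggest the BUCKET_COUNT needs adjusting:

-- Review hash bucket usage for memory-optimized tables in the current database
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.total_bucket_count,
       s.empty_bucket_count,
       s.avg_chain_length,
       s.max_chain_length
  FROM sys.dm_db_xtp_hash_index_stats AS s;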

Additionally, it is worth noting that Memory-Optimized tables only support the following data types: bit, tinyint, smallint, int, bigint, money, smallmoney, float, real, datetime, smalldatetime, datetime2, date, time, numeric, decimal, char(n), varchar(n), nchar(n), nvarchar(n), sysname, binary(n), varbinary(n), and uniqueidentifier. Notice that none of the large object data types are allowed, not even the variable-length "max" data types. Also worth mentioning is that the combined record length must not exceed 8,060 bytes, and this limitation is enforced at the time you create the table.
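As a quick illustration of those restrictions, the hypothetical table below, which tries to use varchar(max), is rejected when you run the CREATE TABLE statement (the table name is made up purely for this example):

-- This hypothetical CREATE TABLE fails because varchar(max)
-- is not a supported data type for Memory-Optimized tables
CREATE TABLE MemoryOptimized_Bad_Example
(
       Id int NOT NULL,
       C1 varchar(max) NOT NULL,
       CONSTRAINT PK_MemoryOptimized_Bad_Example PRIMARY KEY NONCLUSTERED HASH (Id) WITH (BUCKET_COUNT = 1024)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA)
GO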

The last table I will create will be named "MemoryOptimized_Schema_Only". This table will have DURABILITY set to SCHEMA_ONLY, and I will create it using SQL Server Management Studio. To do this you start just as you would for a normal table: expand your database in the Object Explorer of SQL Server Management Studio, right-click on the "Tables" item, and then hover over the "New" item, which will display the "Memory Optimized Table…" option as shown in the screenshot below:

Memory Optimized Table Option

Note that this menu option is only available in databases that have a memory-optimized data filegroup.

To create my Memory-Optimized table I click on the "Memory Optimized Table…" item in the menu, which brings up a template in a query window for creating a Memory-Optimized table. Once I specify all of the template parameters and clean up the generated code, below is the code I will use to create my third table. I know, using a template and then cleaning up the script is not truly a GUI-driven interface, but at least it gives you a starting point for creating your Memory-Optimized table.

--Drop my Schema_Only Memory-Optimized table if it exists
IF  OBJECT_ID('MemoryOptimized_Schema_Only','U') IS NOT NULL
    DROP TABLE MemoryOptimized_Schema_Only
GO
 
--Create My Schema_Only Table
CREATE TABLE MemoryOptimized_Schema_Only
(
       Id int NOT NULL, 
       C1 char(1000) NOT NULL,
    CONSTRAINT PK_MemoryOptimized_Schema_Only PRIMARY KEY NONCLUSTERED HASH (Id) WITH (BUCKET_COUNT = 1024) 
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO
 

Now that all three of my tables are created I can move on to populating them with data.
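Before loading any data, it is worth a quick sanity check that each table was created the way I intended. In SQL Server 2014, sys.tables exposes an is_memory_optimized flag and a durability_desc column, so a query along these lines should show which of the three tables are memory-optimized and what durability setting each one uses:

-- Verify which tables are memory-optimized and their durability setting
SELECT name, is_memory_optimized, durability_desc
  FROM sys.tables
 WHERE name IN ('Normal',
                'MemoryOptimized_Schema_And_Data',
                'MemoryOptimized_Schema_Only');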

Inserting Data into a Memory-Optimized Table  

To test loading my Memory-Optimized tables I decided to build a script that would load 1000 rows in each of my three tables.  Below is the code that I used to test loading my Normal table and my two Memory-Optimized tables:

SET NOCOUNT ON;
 
USE InMemory;
go
 
-- Cleanup from prior INSERT executions
DELETE FROM  Normal;
DELETE FROM MemoryOptimized_Schema_And_Data;
DELETE FROM MemoryOptimized_Schema_Only;
 
SET STATISTICS IO Off;
SET STATISTICS TIME Off;
 
DECLARE @s datetime = getdate()
 
-- Load Normal Table
DECLARE @I int = 0;
WHILE @I < 1000
BEGIN
       SET @I+=1;
       INSERT INTO Normal(Id,C1) VALUES (@i,cast(@I as varchar(4)) + 'A');
END;
 
SELECT DATEDIFF(ms,@s,getdate()) as Normal;
 
-- Load SchemaAndData table
SET @s = getdate();
SET @I = 0;
WHILE @I < 1000
BEGIN
       SET @I+=1;
       INSERT INTO MemoryOptimized_Schema_And_Data(Id,C1) VALUES (@i,cast(@I as varchar(4)) + 'A');
END;
 
SELECT DATEDIFF(ms,@s,getdate()) as SchemaAndData;
 
-- Load SchemaOnly table
SET @s = getdate();
SET @I = 0;
WHILE @I < 1000
BEGIN
       SET @I+=1;
       INSERT INTO MemoryOptimized_Schema_Only(Id,C1) VALUES (@i,cast(@I as varchar(4)) + 'A');
END;
SELECT DATEDIFF(ms,@s,getdate()) As SchemaOnly;
 
 

By reviewing this script you can see that I displayed the time it takes to load 1000 rows into each table.   Here is the output of one of my test runs:

Normal
-----------
1426
 
SchemaAndData
-------------
1746
 
SchemaOnly
-----------
146

As you can see, I was able to load my "Normal" table faster than my "MemoryOptimized_Schema_And_Data" table. I ran my script a number of times, and sometimes loading rows into the "MemoryOptimized_Schema_And_Data" table outperformed loading rows into my "Normal" table, but even when it did, the margin was small. Additionally, you can see that inserting rows into my "MemoryOptimized_Schema_Only" table outperformed loading data into the other two tables. No matter how many times I ran this script, loading the table with a DURABILITY setting of SCHEMA_ONLY always outperformed loading the other two tables.

It is understandable that inserting data into my "Normal" and "MemoryOptimized_Schema_And_Data" tables might perform about the same. I say this because a Memory-Optimized table with a DURABILITY setting of SCHEMA_AND_DATA needs to perform I/O to the checkpoint files and transaction log when records are inserted, just like a normal table. This additional I/O slows down INSERT performance, but it is what allows SCHEMA_AND_DATA tables to be recovered should the server crash or need to be restarted. Rows inserted into my "MemoryOptimized_Schema_Only" table, by contrast, will not be recovered should SQL Server crash or be restarted. If insert performance is critical and you can afford to lose the data after a restart, it looks like you should consider using Memory-Optimized tables with a DURABILITY setting of SCHEMA_ONLY.

Limitations of Memory-Optimized Tables

Memory-Optimized tables have great potential for improving application performance, but they do not operate exactly the same as a normal table.  Here is a list of a few things I ran into when I was exploring Memory-Optimized tables for this article:

  • I could not issue a TRUNCATE TABLE statement against my Memory-Optimized tables (see the workaround sketch after this list).
  • I could not perform an ALTER TABLE statement against my Memory-Optimized tables.
  • I could not update the primary key columns of my Memory-Optimized tables.

Here are a few more documented limitations:

  • They cannot contain an identity column.
  • They do not support DML triggers.
  • They cannot have FOREIGN KEY or CHECK constraints.
  • They cannot have UNIQUE constraints other than the PRIMARY KEY.
  • They will only support up to 8 indexes.
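Since TRUNCATE TABLE and ALTER TABLE are off limits, the practical workarounds are the obvious ones: clear a Memory-Optimized table with DELETE (the insert test script earlier in this article does exactly that), and change a table's definition by dropping and re-creating it. Here is a minimal sketch reusing one of the tables from this article; the added C2 column is purely hypothetical:

-- TRUNCATE TABLE is not supported, so clear the table with DELETE
DELETE FROM MemoryOptimized_Schema_Only;
GO
 
-- ALTER TABLE is not supported, so drop and re-create the table
-- to change its definition (here adding a hypothetical C2 column)
DROP TABLE MemoryOptimized_Schema_Only;
GO
CREATE TABLE MemoryOptimized_Schema_Only
(
       Id int NOT NULL,
       C1 char(1000) NOT NULL,
       C2 int NULL,
       CONSTRAINT PK_MemoryOptimized_Schema_Only PRIMARY KEY NONCLUSTERED HASH (Id) WITH (BUCKET_COUNT = 1024)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO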

This is probably not a complete list of limitations.  Possibly as part of your testing you will find additional limitations.

What Happens to Memory-Optimized Tables When Restarting SQL Server or Taking a Database Offline?

It is important to know what happens to your Memory-Optimized data should SQL Server crash, be restarted, or the database that contains the Memory-Optimized data be taken offline. The DURABILITY mode you used when creating your Memory-Optimized table determines whether its data is retained when SQL Server crashes or restarts. If you defined your Memory-Optimized table with DURABILITY = SCHEMA_ONLY then your data will be lost, but if you used DURABILITY = SCHEMA_AND_DATA then your data will be available after SQL Server is restarted.

When SQL Server restarts, or the database is brought back online, the Memory-Optimized tables defined with the SCHEMA_AND_DATA DURABILITY option will be read from the checkpoint and transaction log files and repopulated in memory. One thing to note is that while SQL Server repopulates memory with the SCHEMA_AND_DATA Memory-Optimized tables, your database will be in a recovering state and will be unavailable. Below is what my Object Explorer shows right after I restarted my SQL Server 2014 CTP1 instance.

Object Explorer - InMemory (In Recovery)

Note that my InMemory database is in the "In Recovery" state. Once all the data for my Memory-Optimized tables has been loaded into memory, the database becomes available. Also, when I looked in the ERRORLOG file I found messages like these indicating that the Memory-Optimized tables were being repopulated:

2013-08-20 23:19:41.96 spid26s     Recovery of database 'InMemory' (7) is 0% complete (approximately 911 seconds remain). Phase 1 of 3. This is an informational message only. No user action is required.
2013-08-20 23:19:50.43 spid26s     Recovery of database 'InMemory' (7) is 1% complete (approximately 889 seconds remain). Phase 1 of 3. This is an informational message only. No user action is required.
2013-08-20 23:21:22.81 spid9s      0 transactions rolled back in database 'InMemory' (7:0). This is an informational message only. No user action is required.
2013-08-20 23:21:22.81 spid9s      Recovery is writing a checkpoint in database 'InMemory' (7). This is an informational message only. No user action is required.
2013-08-20 23:21:22.84 spid9s      Recovery completed for database InMemory (database ID 7) in 101 second(s) (analysis 62991 ms, redo 1635 ms, undo 24429 ms.) This is an informational message only. No user action is required.
2013-08-20 23:21:22.96 spid9s      Recovery is complete. This is an informational message only. No user action is required.
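If you prefer to check the recovery state from a query window instead of Object Explorer, a simple check such as the one below will tell you when the database is back online:

-- Check whether the InMemory database has finished recovering
SELECT name, state_desc
  FROM sys.databases
 WHERE name = N'InMemory';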

Additional Resources

My examples of how to use Memory-Optimized tables are quite simple. If you want to explore Memory-Optimized tables further, you might want to review the following documentation that Microsoft has provided: http://msdn.microsoft.com/en-us/library/dn133186(v=sql.120).aspx

If you want to read Kalen Delaney’s white paper about Memory-Optimized tables then you can find it here.

More to Come

My research for this article was done using the CTP1 release of SQL Server 2014.  I’m sure there will be more features to come related to Memory-Optimized tables as Microsoft releases newer versions of SQL Server 2014.   If you haven’t already downloaded and installed the latest CTP for SQL Server 2014 then you should consider doing that.  By installing SQL Server 2014 you can further explore the Memory-Optimized tables and the other new features that came out with SQL Server 2014.


Gregory Larsen
Gregory A. Larsen is a DBA at Washington State Department of Health (DOH). Greg is responsible for maintaining SQL Server and other database management software. Greg works with customers and developers to design and implement database changes, and solve database/application related problems. Greg builds homegrown solutions to simplify and streamline common database management tasks, such as capacity management.
