This session focused on the 'ease of use' release theme. The presenter stated that people are buying 2 to 300 copies of SQL Server at a time.
Microsoft's goal is to move out of the workgroup
niche and have SQL Server on the desktop and in large organizations.
For the desktop, they want to have SQL Server
self-configuring, with no DBA requirement. For workgroups, they want it to be
self-managing. For large organizations, they want the lowest total cost of
ownership, to support multi-server operations and to provide interoperability.
There will be 20 wizards in Beta 3, with 5-10 more by ship date.
For the DBA, they are providing a database designer. This
is basically the Visual Database Tools diagrammer. It is not a full-blown data
modeling tool like ERWin or System Architect. There was a demo of the database
diagrammer. It will be possible to print the database diagrams by release; right
now it isn't. SQL Trace has been renamed SQL Profiler and is much
enhanced. The Index Tuning Wizard is almost frightening, but since it
doesn't handle concurrency issues I don't think I'll be out of
work yet! A functioning (unlike previous efforts) graphical showplan is
available and it is really good. Shows the costing info and decision process
clause by clause.
For data warehousing, they are providing the Data
Transformation Services, the Microsoft Repository, which will be part of SQL
Server, and Microsoft English Query. (More on these later.)
Distributed Management Framework and DMO
For ISV's and MIS Applications, they are going to
make it easy to embed and deploy SQL Server. SQL-DMO provides a COM
administrative interface. There is something called the SQL-namespace which I
don't completely understand. It is composed of administration objects and
the administration UI. You are supposed to be able to put specific dialogues or
wizards into any tool that supports SQL Server. I don't know how to do
this yet, but it would clearly solve some of the problems of presenting some but
not all functions to a power user who isn't a full-fledged administrator.
The new Distributed Management Framework was shown as a layered diagram (not reproduced here).
You will be able to use VB to launch a wizard, as well.
The events layer is used by the SQL Profiler, which allows it to see
"inside" stored procedures (the current SQL Trace only shows you the
exec statement for the procedure). Things like table scans and locks will also
be events. I assume this means we might be able to set up alerts for a table
scan of a big table but I don't know for sure.
The 7.0 Enterprise Manager is a "snap-in" for the Microsoft
Management Console. You can also manage SQL Server 6.5 servers from the MMC but
not with the same tool. They are simply building the 6.5 Enterprise Manager into its own snap-in.
There are many new wizards including a much more powerful maintenance plan
wizard and import and export wizards for Data Transformation Services.
Web-based administration will be supported, initially for monitoring through
HTML pages shared with the MMC.
DMO is substantially changed. All administration functions are supported
including DML and DDL. There is an integration interface for ISV's. DMO can be
hosted in COM products (VC++, VB, Visual J++, MTS, Active Server Pages).
The new object model was shown as a diagram (not reproduced here).
Existing DMO applications will not work against the 7.0 DMO. However,
Microsoft plans to have an upgrader for 6.5 DMO. Don't ask me how or what.
Backup and Recovery
The way databases are backed up while users are online has been changed. In
6.5, the backup is of a consistent set of pages. Transactions are delayed until
affected pages are written, which affects transaction throughput, and the disk
is accessed in non-optimal order as the backup process copies data.
The 7.0 backup uses a "fuzzy backup" and includes the transaction
log to make it consistent. There is minimal impact on running transactions
(figures given were that backup with live users on system had about a 10% impact
on users and a 10-20% impact on the backup). There is no out-of-sequence list. The
pages being backed up are not cached so it is fast.
SQL Server 7.0 will have an incremental backup: everything changed since the
last full backup. This is used mainly to reduce recovery time. You would restore
the last full, the last incremental and the log backup instead of several log
backups. It may also reduce the time to back up.
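As a sketch of how this cycle might look in Transact-SQL (database name and paths are mine, and the exact 7.0 syntax could still change before release):

```sql
-- Weekly full backup (hypothetical database and paths)
BACKUP DATABASE Sales TO DISK = 'D:\backup\sales_full.bak' WITH INIT

-- Nightly incremental: everything changed since the last full backup
BACKUP DATABASE Sales TO DISK = 'D:\backup\sales_diff.bak' WITH DIFFERENTIAL, INIT

-- Periodic log backups
BACKUP LOG Sales TO DISK = 'D:\backup\sales_log.bak'

-- Recovery: last full, last incremental, then only the logs taken since it
RESTORE DATABASE Sales FROM DISK = 'D:\backup\sales_full.bak' WITH NORECOVERY
RESTORE DATABASE Sales FROM DISK = 'D:\backup\sales_diff.bak' WITH NORECOVERY
RESTORE LOG Sales FROM DISK = 'D:\backup\sales_log.bak' WITH RECOVERY
```

The NORECOVERY/RECOVERY options keep the database unavailable until the last piece of the restore plan has been applied.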
Files and file groups can be backed up individually. This is to support
VLDBs with tight backup windows. You will still need to consider referential
integrity, etc., if you are backing up files and file groups. You must back up
the transaction log separately since it is in a file of its own and won't be
automatically done as part of a file group backup.
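A hedged sketch of a file group backup, with hypothetical names (7.0 syntax not final):

```sql
-- Back up one file group of a large database, not the whole thing
BACKUP DATABASE Sales FILEGROUP = 'CurrentOrders'
   TO DISK = 'D:\backup\sales_fg.bak'

-- The log lives in its own file and must be backed up separately
BACKUP LOG Sales TO DISK = 'D:\backup\sales_log.bak'
```

Note that restoring a single file group later will still need the log backups to bring it into line with the rest of the database.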
You can restore files and file groups as well. This is useful for recovery
from isolated media failure. Multiple restore plans are possible. For
example, suppose you have a full backup on Sunday, an incremental on Tuesday and
Thursday, and log backups for Tuesday, Wednesday and Thursday. If you crash later
on Thursday you could:
- restore from Sunday's full, Thursday's incremental and Thursday's log, or
- restore from Sunday's full, Tuesday's incremental and the Tuesday through Thursday logs.
This helps with the "My tape broke" problem.
There is an assisted restore that builds a restore plan based on available
backups: either a full database restore or only damaged files. The assisted
restore allows a choice of alternate plans and performs the restore for you.
Additional features of backup and restore:
- The database is created automatically on restore
- Files can be relocated
- Backup supports Microsoft Tape Format (MTF), allowing sharing of media among backup packages such as NT Backup
- You can resume an interrupted operation (backup or restore) near the point of interruption
- You can verify a backup set without actually restoring it
There is no table-level backup and restore in 7.0.
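Based on the feature list above, a verification pass might look something like this (device path is hypothetical; syntax subject to change during the beta):

```sql
-- Check that the backup set is readable, without restoring anything
RESTORE VERIFYONLY FROM DISK = 'D:\backup\sales_full.bak'

-- List what is on the backup device
RESTORE HEADERONLY FROM DISK = 'D:\backup\sales_full.bak'
```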
As to performance, they say that the backup/restore runs at media speed.
There is minimal impact on the server (90%+ transaction throughput while
maintaining 80%+ backup speed). It scales up well as backup devices are added. Fast
rollforward is 2-10 times (typically 4 times) faster than 6.5. Third party
backup products can achieve the same performance.
As to DBCC, the message was "you don't need to run it". However, it
is very fast, and scans at near disk speed. It does a single scan of the
database checking all objects in parallel (the current DBCC checks each table
one after another, then each secondary index). The 7.0 DBCC will generate its
reports as a result set and will optionally repair some problems.
[A table of DBCC throughput versus 6.5 was shown for several cases: a single
table with a non-clustered index; a single table with one clustered index; a
single table with one clustered and multiple non-clustered indexes; and a full
database check of a complex 47GB customer database. Most of the figures did not
survive in these notes.]
Why you don't have to run DBCC: There is a fast fail philosophy; problems are
caught immediately. SQL Server 7 has simpler data structures. They recommend you
run it until you gain confidence in 7.0 and that eventually you will find that
it is not part of regular maintenance.
The proposed things that DBCC will repair (not definite for the 7.0 release)
are allocation structures and minor BTREE inconsistencies. It may rebuild
damaged indexes and remove damaged BLOBs and records. Its repairs can be rolled
back.
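A sketch of what running the new DBCC might look like (database name is hypothetical, and the repair options shown were described as proposed, not final):

```sql
-- Single pass over the whole database, objects checked in parallel
DBCC CHECKDB ('Sales')

-- Proposed repair form (not definite for the 7.0 release);
-- repairs are said to be roll-back-able
DBCC CHECKDB ('Sales', REPAIR_REBUILD)
```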
Bulk Data Load (BCP)
There is a new technique for bulk loading that uses OLEDB internally. You can
use constraints and indexes and still have fast loading. A single client loads
twice as fast as 6.5. Parallel loads into a single table scale linearly with the
number of CPUs. Special techniques for BLOBs give faster load rates: 5.1 MB/sec
from a single client.
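If the new OLE DB bulk-load path is exposed through a Transact-SQL BULK INSERT statement (my assumption; the session only discussed BCP), a load with constraints and indexes left in place might look like this, with hypothetical file and table names:

```sql
-- Load a tab-delimited character file into an existing table;
-- constraints and indexes can remain in place with the 7.0 loader
BULK INSERT Sales..Orders
   FROM 'D:\load\orders.dat'
   WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n')
```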
Alerts and Jobs ("The artist formerly known as Tasks")
These remain under the control of the SQL Agent (the equivalent of the SQL
Executive in 6.5).
Very little appears to have changed in the alerting mechanism. The presenter
said that there were threshold alerts, but the Books On Line (Beta 2) only refer
to performance monitor threshold alerts which are just the same as they are now.
Perhaps this feature will be added in Beta 3.
The job "engine" is much enhanced. It permits multiple-step jobs
with dependencies. You can script jobs with VBScript. Jobs can be scheduled on
a regular (calendar) basis as they can be in 6.5. They can also be scheduled to
run on CPU idle, or on/above/below a threshold. Jobs can be triggered by alerts
just as they can in 6.5.
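A sketch of how a multi-step job might be defined through stored procedures in msdb (the procedure and parameter names are my assumption and may differ in the shipping release; all job, database and path names are hypothetical):

```sql
-- Create the job
EXEC msdb..sp_add_job @job_name = 'Nightly backup'

-- First step: back up the database
EXEC msdb..sp_add_jobstep @job_name = 'Nightly backup',
     @step_name = 'Back up Sales',
     @command = 'BACKUP DATABASE Sales TO DISK = ''D:\backup\sales.bak'''

-- Calendar schedule: daily at 02:00
EXEC msdb..sp_add_jobschedule @job_name = 'Nightly backup',
     @name = 'Every night', @freq_type = 4, @freq_interval = 1,
     @active_start_time = 020000
```

For the multi-server scenario described below, the job would additionally be targeted at remote servers, which pull it down and run it on their own schedule.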
Multi-server operation is built into the SQL Agent. Jobs can be defined
centrally and support remote execution on multiple servers. Status of jobs can
be rolled up to this central server. These jobs are "pulled" to remote
servers. This increases scalability.
For example, one might define jobs on a server called "Queen" that
back up databases on Servers A, B, and C. Servers A, B, and C connect to the
Queen server and pull the jobs down. They execute them on their own schedule and
report the results to the Queen. They do not need to be connected to the Queen
server while running their jobs.
The demo for this section was the new version of SQLMaint. Now it will
include all user db's in the maintenance plan. If you add a database, it
automatically gets included in the plan. It will be possible to log success or
failure of the SQLMaint runs to a table which can be either local or remote.
Data Transformation Services
The presentation on this topic in this session was short. I will describe DTS
in a separate article.
SQL Profiler
This is a replacement for the SQL Trace program in 6.5. It serves as a
monitoring tool and also can gather a "workload" for use by the Index
Tuning Wizard.
It captures Open Data Services, Query Processor, Transaction Log and Lock
"events". It will be possible to see "inside" stored
procedures and triggers rather than just seeing the "EXEC MyProc" statement. It uses a general-purpose event model (shown as a diagram, not reproduced here).
It is clear that this model is extensible, though I have not yet learned how
to define my own event producer or custom consumer.
The SQL Server 7 release includes a Query Analyzer which is very similar in
nature to the current ISQL/W. The graphical ShowPlan is available through this
tool, as is the index tuning wizard (the latter can be run standalone as well.)
The index tuning wizard is very clever. It got two out of three indexes right on
one of my test cases (ones I give students in my classes). It did not propose
the third index, and indeed, that index gives only a minuscule performance
increase anyway. It found the two "big bang" indexes very quickly.
Casey Kiernan (Product Manager for all of the new tools) reported that the index
wizard seemed to suggest much wider, covering, indexes than people would
normally expect, but that they work very well.
SQL Server on the Desktop
The presenter stated that the features aren't yet cast in concrete. I have a
little trouble with that statement as Beta 2 is supposed to be largely feature
complete, and this seems to me to mean that there won't be substantial changes.
But I could be wrong.
The goal is to have SQL Server everywhere. (Though they didn't talk about a
Windows CE version, so I guess it won't be in my refrigerator any time soon!)
The plan is to have the same product on all platforms. Pricing, packaging and
licensing decisions are not complete yet. However this appears to be what is
going to happen:
Desktop SQL Server - runs on Windows 9x and Win NT Workstation
SQL Server - runs only on WinNT server
SQL Server Enterprise - runs only on WinNT Enterprise Server
Plain SQL Server today runs on WinNT workstation; this probably won't be
possible in SQL Server 7.0.
Desktop SQL Server will be a full-featured product that supports a single
user or a small workgroup. It will use the same programming model as the server
version and applications will run on all versions. It will be optimized for
smaller systems and memory - apparently the goal is to get it to run on a 16MB
machine. It will probably support a maximum of 5 users (a hard limit, not an
expectation of how many can connect and have it still continue to function).
Limitations in the product will be those imposed by the platform. For
example, clustering won't be supported, and integrated security, async I/O and named
pipes are not available in Windows 95/98. [The absence of integrated security
could, it seems to me, make some apps that run on Server fail on Win 9x.]
The presenter reported that the Win 9x version has been running on some
machines since January with no signs of database corruption. The desktop version will
have the same dependability, integrity and recoverability as the server version.
The automatic tuning features of the Server product and the automatic
resource management (database and log autogrow and autoshrink, allocate and free
memory depending on OS load) will be in the desktop product.
The desktop version will be able to do the same distributed heterogeneous
queries as the server version.
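A sketch of a distributed heterogeneous query as I understand it (the linked-server name, OLE DB provider and file path are hypothetical, and the syntax may change before release):

```sql
-- Register an OLE DB data source (an Access .mdb file here) as a linked server
EXEC sp_addlinkedserver @server = 'AccessData',
     @srvproduct = 'Access',
     @provider = 'Microsoft.Jet.OLEDB.4.0',
     @datasrc = 'D:\data\orders.mdb'

-- Join a local table against the remote source in one query
SELECT c.CustomerName, o.Total
FROM   Sales..Customers c
JOIN   AccessData...Orders o ON o.CustID = c.CustID
```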
Replication will be supported, but there are some limits. A database on SQL
Server on NT Workstation or Windows 9x can publish only if one is using Merge
replication (see forthcoming article). For transaction-based replication,
databases on the desktop platform can only participate as subscribers; they
cannot publish.
There is also to be an embedded SQL Server. This is a minimal version
consisting of the database engine and core components. This is designed to allow
people to build SQL Server into their applications "invisibly". It is
intended to provide a lightweight, full function database with a small disk
footprint (target: < 30 MB) and low memory requirements (2-5 MB under
Operating System pressure). The embedded version can be built into ISV
applications and is the version that will be available through the next version
of Microsoft Access.
The presenter also talked about how SQL Server will integrate with the next
release of Access. There will be a setup option to use the SQL Server engine; it
will not be the default. Access can use a local or a remote SQL Server. The
Access forms and reports will still be available, and the "DaVinci" design tools [Visual Database Tools] will be available for tables, queries,
views, schema, stored procedures. It will be possible to do basic SQL Server
admin (security, replication, database creation, backup and restore) from the
Access environment.
Access 9 will connect natively to OLE/DB data sources (I presume this means
without JET). Application objects will be stored in an ADP (Access Database
Project) file. This can include the connection to SQL Server. There will be
client-side caching for form and table browse. There will be updateable snapshot
cursors, local filtering, sort and search, and "optimistic updates".
[I assume this means something like the client batch libraries in VB 5.]
Upgrading to 7.0
Microsoft's goal for upgrading databases is to make it easy, fast, reliable
and compatible. If you upgrade a SQL Server 6.5 database to 7.0, it will run in
6.5 compatibility mode. You will be able to turn this off once you are certain
(more below) that everything will work OK.
You can specify the level of verification you want when you upgrade. You
cannot upgrade individual tables.
There is a wizard to guide you through. It does a fully automatic upgrade of
your server configuration, security, objects, data, replication settings and
tasks. You can also manage this with INI files (I don't know how yet).
For a single machine upgrade, you must use named pipes. The upgrade will
check for disk space as it is going to make a copy of the database, though there
is an option to use a tape. You cannot upgrade a 6.5 database in place; there
must be a copy. You can upgrade 6.0 and 6.5 databases to 7.0. You must upgrade
4.2 databases to 6.x before upgrading to 7.0.
You do not have to upgrade everything at once, and you can stop and restart
as you wish. If servers are involved in replication, they can upgrade in any
order, with the exception of a separate distribution server, which must be
upgraded first.
The upgrade is designed to be fast. There is a special export engine and a
custom OLE DB driver optimized for high-speed import. Timings they have gotten
so far (using a pipeline, not a tape) were shown but are not reproduced here.
Neither server can be in use during the upgrade. The number of databases
doesn't influence the time unless you select the detailed verification (CRC
checks).
The upgrade is a CPU-intensive task, and they recommend that you get rid of
any unneeded objects before upgrading. Factors that influence the speed include
the number of objects.
It is reliable. It does not lose or duplicate data and you don't have to
start all over in case of a failure. It does not leave the database in an
inconsistent state. You can request comparisons of:
- Row counts for all tables
- CRC checks of all data
Microsoft will have upgraded 1000 different customer databases by ship date.
They are striving to preserve compatibility with 6.0/6.5 databases and
provide compatibility modes which:
- Avoid keyword conflicts
- Use implicit ORDER BY on GROUP BY
- Preserve the 6.5 handling of empty strings
- Do not require aliases on UPDATE statements
The system databases are always in 7.0 mode. The compatibility setting is on
a per-database basis, so you can turn it off one database at a time when you
are ready.
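The per-database setting appears to be exposed through a stored procedure such as sp_dbcmptlevel (my assumption from the beta; database name is hypothetical):

```sql
-- Report the current compatibility level of a database
EXEC sp_dbcmptlevel 'Sales'

-- Switch off 6.5 compatibility once everything is known to work
EXEC sp_dbcmptlevel 'Sales', 70
```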
With Beta 2, they have found that the most common problems are:
- Direct access to system tables
- Problems already present in users' objects that manifest themselves when
upgraded
They recommend running DBCCs on 6.5 databases and fixing problems before
upgrading.
You can customize the upgrade in several ways:
- Convert only selected databases. This should only be used for testing: you
cannot replicate/sync data between 6.x and 7.0, and it breaks cross-database
dependencies.
- Skip the transfer of configuration options.
- Extend the process with supplemental scripts run before or after certain
upgrade steps ("Professional stunt drivers on closed track only").
Upgrade files will be placed in the MSSQL7\Upgrade directory. There will be a
subdirectory for each upgrade performed: MSSQL7\UPGRADE\machinename and in that,
a subdirectory for each database: MSSQL7\UPGRADE\machinename\nnnDBNAME.
There will be some stored procs created in the 6.x database to support the
upgrade.
The files in the database directory include the scripts for table
definitions, stored procedures, views, triggers, defaults, rules and their
column bindings, user-defined data types, DRI, permissions, logins, users and
groups. Note that the login password for standard security will NOT be
exported. [Users at TechED requested that an option be provided for preserving
passwords.] Groups will be migrated to roles.
- Segments don't migrate, you will have to write some custom something if you
are using segments (other than the default ones).
- Text must be intact in syscomments
- If you have used sp_rename to rename an object, this change is not carried
back to syscomments and can cause problems. [I have always urged people not to
use this anyway.]
- Since master must be at 7.0 compatibility, the syntax used in any objects in
master must conform to 7.0 requirements.
- Computer name and local servername (@@servername) must be the same.
- 6.x tempdb must be at least 10MB
- Stored procs that modify system tables will not be transferred.
- System table schemas change between 6.x and 7.0. (Customers are urged not to
interrogate system tables directly in 7.0; many more covering functions will be
provided.)
- Stored procs created from within stored procs will not be scripted as they
are not in syscomments. [Since one of the rules is that you can't issue a CREATE
PROC inside a stored procedure, I am at a loss as to how one would do this
anyway.]
- Tables created by sa for another user (e.g., CREATE TABLE user1.tab1) cannot
be scripted as user1 does not have create table privilege.
- Meaningless 6.x permissions (e.g., select permission on a stored proc) will
not be scripted.
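For the requirement that the computer name and @@servername match, a sketch of the usual fix (server names are hypothetical; the change takes effect after SQL Server is restarted):

```sql
-- Check what the server thinks it is called
SELECT @@servername

-- Re-register the local server name if it does not match the computer name
EXEC sp_dropserver 'OLDNAME'
EXEC sp_addserver 'NEWNAME', 'local'
```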