Write Conflict in Access using SQL Server - sql-server

There are other questions on this site that come pretty close to what I'm about to ask, but mine is slightly different (if anyone knows of an existing question that covers this problem exactly, please point me to it).
I have Windows Server 2003 which has SQL Server 2000 (I think) on it. On that SQL Server I have 5 databases, four of which I currently use. I also have a laptop (Windows XP Pro) with a copy of Enterprise Manager installed, and I maintain the SQL databases from the laptop, over the network, with no problems whatsoever.
I also have on the laptop a 'frontend' in Access with an ODBC link, so that I can use forms in Access to view the contents of the tables in SQL Server. I have had no problems with this setup for several weeks (since I created the databases, in fact).
However, the problem I have with Access is that I AM able to alter the information in two of the databases, but NOT in the other two. When I try to make changes in either of the two non-working ones, I get a "Write Conflict - this record has been changed by another user..." error, yet I am THE ONLY user! I am using a SQL Server login, which I have to type in each time I open the Access front end. What is going on here? I have read something about a timestamp field, but I don't understand why I might need to implement one, or indeed how to implement it. This issue is driving me nuts!

Do the tables you are amending have correct primary keys in the 2 databases which aren't working? The reason I ask is that if Access is unable to determine the exact record you are updating (due to the lack of a primary key or other unique field), it will often give the 'Write Conflict' error. As you've mentioned, people also often suggest timestamp fields when this issue occurs, but in my experience that only helps when dealing with Access and MySQL, rather than SQL Server.
Also, which version (and service pack) of Access are you using?
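If one of the failing tables does turn out to have no primary key, here is a minimal sketch of adding one from the Access side via a pass-through query and then refreshing the link. The table, column and DSN names (tblOrders, OrderID, MyServerDSN) are stand-ins for your own; you could just as well run the ALTER TABLE from Enterprise Manager.
Public Sub AddMissingPrimaryKey()
    ' Hypothetical sketch: add a primary key to a keyless SQL Server table,
    ' then refresh the link so Access treats the table as updatable again.
    Dim qdf As DAO.QueryDef
    Set qdf = CurrentDb.CreateQueryDef("")          ' temporary pass-through query
    qdf.Connect = "ODBC;DSN=MyServerDSN;DATABASE=MyDatabase;"
    qdf.ReturnsRecords = False
    qdf.SQL = "ALTER TABLE dbo.tblOrders " & _
              "ADD CONSTRAINT PK_tblOrders PRIMARY KEY (OrderID);"
    qdf.Execute dbFailOnError                       ' run the DDL on the server
    ' Refresh (or delete and re-create) the ODBC link so Access sees the new key
    CurrentDb.TableDefs("dbo_tblOrders").RefreshLink
End Sub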

If you have any BIT fields and you are doing an update on the table, you have to put a value into each BIT field, e.g. 0; they cannot be null.
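As a rough, hedged illustration (the table and column names are invented), the existing NULLs can be cleaned up in one statement against the linked table; making the column NOT NULL with a DEFAULT of 0 on the server side then keeps the problem from coming back:
Public Sub FixNullBits()
    ' Hypothetical sketch: replace NULLs in a BIT column of a linked SQL Server
    ' table so Access edits stop raising write conflicts. Names are assumptions.
    CurrentDb.Execute _
        "UPDATE tblOrders SET IsShipped = 0 WHERE IsShipped IS NULL", _
        dbFailOnError + dbSeeChanges
End Sub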

Related

Migrating an Access 2002 Application to Access 2019 or SQL Server?

I have a series of Access 2002 "Front-end/Back-end" applications all related to each other. For example, application A has linked tables with application B and vice versa.
The applications are at a stage where daily compaction and repair is required due to the volume of data and the high level of corruption. Moreover, to be able to make the applications work properly, I must make any changes in a virtual environment with Access 2002. I also need to reinstall the "Access Runtime 2010 - 32-bit" and copy the Access files (.mde) to every workstation (Windows 10) every time I make a change to the applications.
@Gustav This is a temporary option (6 to 18 months) because the customer would like to move to a complete solution with a SQL database. The solution under consideration is configurable and already has a SQL database schema.
I have already done a test transferring the forms, tables, queries and modules to Access 365, but I get errors in the VBA code. All business rules are coded in the VBA code. I also transferred the tables to SQL Server 2017, but I'm afraid I will have to change a lot of VBA code because the DAO engine is falling out of use in the Access 365 front end.
In fact, to be clearer, I question whether the front end really needs to be moved to a newer Access at all, knowing that it is a temporary solution.
Maybe I should keep the software on life support by purging data history from the large tables while the client makes their decision, and find the "sweet spot" that would allow me to keep maintaining it without having to worry about corruption, because I have a hard time seeing a substantial gain in migrating to an Access 365 front end. What do you think?
I have already proposed migrating the applications and tables to the new Access 2019 version and even moving to SQL Server. However, for now, I must keep the application on life support and continue the daily compaction until a decision is made.
I would like to know whether there is a gain in migrating from Access 2002 to Access 2019, given Access's total 2 GB file size limit. And what are the major constraints in migrating to a SQL database, given that the application's VBA code uses DAO?
@Albert D. Kallal
I really like your answer. I should come to a decision soon.
However, I have 2 additional questions. Perhaps you could guide me on the subject.
Two things have recently come to haunt the tranquility that reigned over these applications.
1- For an unknown reason, one of the applications in the swarm of Access applications was blocked for some time with the error 'Run-time error 3027: Cannot update. Database or object is read-only'. The problem is that some users ignored this error and continued their tasks, which caused the data to drift out of sync.
I had to restore from a backup copy because some tables were not updated.
Looking more closely at the error in the VBA code, I noticed that it all came from the DAO Recordset.Edit method being used on queries with multiple joins.
I managed to work around this problem by replacing the Edit method with DoCmd.RunSQL and changing the query from a SELECT to an UPDATE query.
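For illustration only (the object names below are invented, not taken from the actual application), the workaround amounts to something like this in place of the Recordset.Edit call:
' Rough sketch of the workaround: run an UPDATE directly instead of
' opening the multi-join query and calling .Edit/.Update on it.
Dim strSQL As String
strSQL = "UPDATE tblOrders SET OrderStatus = 'Closed' " & _
         "WHERE OrderID = " & Me!OrderID
DoCmd.SetWarnings False      ' suppress the 'You are about to update...' prompt
DoCmd.RunSQL strSQL
DoCmd.SetWarnings True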
However, the whole method worked perfectly well before.
Can you explain to me the cause of this error?
2- The original developer did not necessarily use best practices in the design of the application (no autonumbers, some tables without primary keys, no foreign keys), so I'm afraid a redesign will be needed if I migrate to a SQL Server database. Or maybe, to save time, since this solution is going to disappear in 18 months, I should simply replicate the bad practices in SQL Server and pray that they don't cause any more glitches. What would be your professional approach?
Thank you
There may be none. Open the database in Access 2019/365 and save it in the 2007 (accdb) format, and check it out.
As for the distribution, you can make this fully automatic using a script and a shortcut. It is explained in full in my article:
Deploy and update a Microsoft Access application with one click
If you don't have an account, browse to the link: Read the full article.
Push hard to get a confirmation on the move of all the shared tables to an SQL Server backend.
A few things:
To update your mde or accde front end? That is a simple copy to each workstation. You don't need to re-install the runtime each time. There is no "special" connection between the runtime and the particular application (mde/accde) that you deploy to each workstation.
In other words:
If you are writing software in VB6, then you need to install the VB6 runtime (but only one time). After that you can simply copy + deploy your application to each workstation.
If you are writing software in, say, .NET, then again you have to ensure the correct .NET framework is installed on each computer. Once that's done, you can again update your software by a simple copy to each workstation.
And the same goes for using the Access runtime. Once it is installed, you can simply copy any mde/accde to that workstation, double-click it and it will run. So the runtime is not connected to any particular database you copy to the workstation. Once you have the runtime installed, you can rather easily cook up some automatic update code for the front end to check some version number and then copy down the new, updated front end. There are quite a few ways to do this - even a simple batch file can often suffice here.
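A minimal sketch of that version-check idea follows; every name in it (tblVersion, VersionNumber, UpdateFrontEnd.cmd) is an assumption, and the copy-and-relaunch script itself is left out:
Public Sub CheckFrontEndVersion()
    ' Hypothetical sketch: compare this copy's baked-in version number with the
    ' one stored in a shared (linked) table, and hand off to an update script if stale.
    Const LOCAL_VERSION As Long = 42
    Dim serverVersion As Long
    serverVersion = Nz(DLookup("VersionNumber", "tblVersion"), 0)

    If serverVersion > LOCAL_VERSION Then
        MsgBox "A newer front end is available and will now be copied down.", vbInformation
        ' The script copies the new mde/accde over this one and relaunches it
        Shell "cmd /c """ & CurrentProject.Path & "\UpdateFrontEnd.cmd""", vbNormalFocus
        Application.Quit
    End If
End Sub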
So in nearly all cases these days, you will have to do a "one time" install of the required runtime and support libraries. This is the case for .NET, older VB6 programs, or Access.
As for migrating the Access table data to SQL Server?
You should be able to simply migrate the table data to SQL Server, then link the application's tables to SQL Server in place of the older Access back end.
At this point, 99% of your VBA and even DAO recordset code should work just fine.
There is no need (or even a good reason) to dump the DAO code you have - it should work as before with VERY few modifications.
About the only change is for code that does this:
Dim strSQL As String
Dim rst As DAO.Recordset
strSQL = "select * from tblCustomers where City = 'Edmonton'"
Set rst = CurrentDb.OpenRecordset(strSQL)
' above for SQL server becomes:
Set rst = CurrentDb.OpenRecordset(strSQL, dbOpenDynaset, dbSeeChanges)
The dbSeeChanges option is required whenever the linked SQL Server table has an IDENTITY column; without it DAO raises run-time error 3622.
And you can even migrate the tables with indexes and table relationships intact by using the SQL Server Migration Assistant for Access (SSMA). You can find this fantastic tool here:
https://www.microsoft.com/en-us/download/details.aspx?id=54255
So, about 99% of existing forms and VBA code will work as before after you migrate the data to SQL server.

How to help QA team access the right database?

In the place I work, very often it happens that a developer and QA session goes like this:
(This is in reference to SQL Server 2005)
QA: I get Invalid object name 'customers'
DEV: huh? can u send me the exact SQL statement you used?
QA: select * from customers
DEV: hmm. (after some thought) Are you sure you're using CUSTDB?
QA: yes
DEV: (after figuring out that QA was using CUSTDB_PRODUCTION) Please add "USE CUSTDB" and then tell me what you get with that SQL.
QA: Oh, sorry, I was using wrong DB.
The tab text for the SQL window shows which database the query is running against, but how do you ensure that QA pays attention to this?
I will admit that I have made this mistake of using the wrong DB many times. I don't tend to read the text in the tab.
What are your experiences with this type of scenario? Have you found a way to help mitigate such a problem?
If your QA is using SSMS for testing, you should try the window coloring options in the SSMS Tools Pack, a free add-in for SSMS. That way you can immediately differentiate between servers.
If that's not an option, don't allow QA to access the production server at all. They shouldn't be able to anyway.
I think you need to formalise how QA will report an error.
You need to specify a set of information that they'll supply with every error report, including:
what they were doing (exactly)
their configuration (including the database!)
time/date (so you can match stuff in logs)
how to repeat it (if repeatable)
etc. You can act on that immediately, or log it in an incident tracking system and come back to it later (in which case the above is invaluable, otherwise it's all lost).
The above can be as simple as an email draft/template. But you need to be rigorous about this, otherwise (as you've discovered) you're going to go round in circles, perhaps without all the salient information you require.
If QA are allowed access to both live and dev databases, using SSMS, then there must be some level of accepted responsibility on their part and/or some level of training of them on your part.
They have been given a tool that allows them to ask questions of the data, but they are asking the wrong questions and then complaining to you - if I were the DBA, I'd simply remove their access until they could demonstrate they knew what they were doing! I sympathise that that might not go down too well, but at least threatening to do it might make them think a little for themselves.
Think of this question as 'someone is doing something wrong'
There are 2 simple answers:
remove their ability to 'do something wrong'
train them to do it right
On the same note as Mladen Prajdic, you can colour code query windows in SQL2008 SSMS too.
Personally, I use the fully qualified name in all queries (server.database.owner.table - well, I only include the server if I'm deliberately using a linked server) because I move from database to database so much. If you specify the database in the queries to be run, they still work if you're connected to a different database on the same server or if you have a linked server. Have your QA adopt this as their standard if they are writing their own queries; if you are writing the test queries, then you should be specifying the database name in the query rather than through a USE statement.

SQL Server Object Organisation

I'm not sure if this is valid, however I have a bugbear with SQL Server, and that is that I cannot organise objects into a group of objects.
Imagine I'm working on a new section of work in a large database and I have perhaps 15 objects that I will be using regularly. What I want to do is "favourite" them into a folder so that I don't have to trawl through all the objects in my databases.
I know I could organise objects by schema, however these objects aren't necessarily schema specific; they cross boundaries.
Has anyone come across a method for organising objects into a favourites group? I know SQL Server Projects organise scripts, but I can't see that they can organise tables.
Thanks
You can't do that with the native tools (SQL Server Management Studio) but there's a workaround: create a new empty database with those 15 tables - just the schema, not the data. Then when you're writing T-SQL code, you can quickly drag and drop elements out of those tables into your code.
The downside is that changes made in the real database won't be reflected in your working database, but you can automate that with a script to pull out the objects you need and recreate them in your working database. You can run that as often as you like (like every X hours, or as a SQL Agent job that runs when your local dev server starts up) without losing data, since you won't be modifying the structure in your "favorites" database.
I know I'm really late to the party, but the question showed up on the right under "Related" and I was curious enough to look.
There is a free add-in for Management Studio that seems to do exactly what you're asking:
http://www.sqltreeo.com/wp/dowload-free-ssms-add-in-to-create-own-folder-for-database-objects/
There is also a $65 commercial add-in which you may want to try as well. I haven't tried either so I'm not sure how well they work or what the paid version offers over the free add-in (if anything).
http://www.skilledsoftware.com/
Also can't hurt to vote for this Connect item and add a comment describing your business use case. While you may find it discouraging that it's been closed as Won't Fix, that is not necessarily a permanent decision:
http://connect.microsoft.com/SQLServer/feedback/details/209340

MS Access Application - Convert data storage from Access to SQL Server

Bear in mind here, I am not an Access guru. I am proficient with SQL Server and .Net framework. Here is my situation:
A very large MS Access 2007 application was built for my company by a contractor.
The application has been split into two tiers BY ACCESS: there is a front-end portion that holds all of the MS Access forms, and a back-end portion (Access tables, queries, etc.) that is stored on a computer on the network.
Well, of course, there is a need to convert the data storage portion to SQL Server 2005 while keeping all of the GUI forms which were built in MS Access. This is where I come in.
I have read a little, and have found that you can link the forms or maybe even the access tables to SQL Server tables, but I am still very unsure on what exactly can be done and how to do it.
Has anyone done this? Please comment on any capabilities, limitations, considerations about such an undertaking. Thanks!
Do not use the upsizing wizard from Access:
First, it won't work with SQL Server 2008.
Second, there is a much better tool for the job:
SSMA, the SQL Server Migration Assistant for Access which is provided for free by Microsoft.
It will do a lot for you:
move your data from Access to SQL Server
automatically link the tables back into Access
give you lots of information about potential issues due to differences in the two databases
keep track of the changes so you can keep the two synchronised over time until your migration is complete.
I wrote a blog entry about it recently.
You have a couple of options. The upsizing wizard does a decent(ish) job of moving structure and data from Access to SQL Server. You can then set up linked tables so your application 'should' work pretty much as it does now. Unfortunately the SQL dialect used by Access is different from SQL Server's, so if there are any 'raw SQL' statements in the code they may need to be changed.
As you've linked to the tables, though, all the other features of Access - the QBE, forms and so on - should work as expected. That's the simplest and probably the best approach.
Another way of approaching the issue would be to migrate the data as above, and then, rather than using linked tables, make use of ADO from within Access. That approach feels familiar if you're used to other languages/dev environments, but it's the wrong approach. Access comes with loads of built-in stuff that makes working with data really easy; if you drop down to ADO/SQL you lose many of those benefits.
I suggest starting on a small, non-essential part of the application: migrate a few tables and see how it goes. Of course, back everything up first.
Good luck
Others have suggested upsizing the Jet back end to SQL Server and linking via ODBC. In an ideal world, the app will work beautifully without needing to change anything.
In the real world, you'll find that some of your front-end objects that were engineered to be efficient and fast with a Jet back end don't actually work very well with a server database. Sometimes Jet guesses wrong and sends something really inefficient to the server. This is particularly the case with mass updates of records -- in order not to hog server resources (a good thing), Jet will send a single UPDATE statement for each record (which is a bad thing for your app, since it's much, much slower than one UPDATE statement covering all the records).
What you have to do is evaluate everything in your app after you've upsized it and where there are performance problems, move some of the logic to the server. This means you may create a few server-side views, or you may use passthrough queries (to hand off the whole SQL statement to SQL Server and not letting Jet worry about it), or you may need to create stored procedures on the server (especially for update operations).
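As one hedged example (the DSN, table and column names here are made up), a mass update can be handed to the server in a single statement with a pass-through query; the same pattern works for calling a stored procedure:
Public Sub ArchiveOldOrders()
    ' Hypothetical sketch: execute one set-based UPDATE on the server instead of
    ' letting Jet send an UPDATE per record. You could equally put "EXEC dbo.SomeProc;"
    ' in the SQL property to call a stored procedure.
    Dim qdf As DAO.QueryDef
    Set qdf = CurrentDb.CreateQueryDef("")        ' temporary pass-through query
    qdf.Connect = "ODBC;DSN=MyServerDSN;DATABASE=MyDatabase;"
    qdf.ReturnsRecords = False
    qdf.SQL = "UPDATE dbo.tblOrders SET Archived = 1 WHERE OrderDate < '20090101';"
    qdf.Execute dbFailOnError
End Sub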
But in general, it's actually quite safe to assume that most of it will work fine without change. It likely won't be as fast as the old Access/Jet app, but that's where you can use SQL Profiler to figure out what the holdup is and re-architect things to be more efficient with the SQL Server back end.
If the Access app was already efficiently designed (e.g., forms are never bound to full tables, but instead to recordsources with restrictive WHERE clauses returning only 1 or a few records), then it will likely work pretty well. On the other hand, if it uses a lot of the bad practices seen in the Access sample databases and templates, you could run into huge problems.
It's my opinion that every Access/Jet app should be designed from the beginning with the idea that someday it will be upsized to use a server back end. This means that the Access/Jet app will actually be quite efficient and speedy, but also that when you do upsize, it will cause a minimum of pain.
This is your lowest-cost option. You're going to want to set up an ODBC connection for your Access clients pointing to your SQL Server. You can then use the (I think) "Import" option to "link" a table to SQL Server via the ODBC source. Migrate your data from the Access tables to SQL Server, and you have your data on SQL Server in a form you can manage and back up. Importantly, queries can then be written on SQL Server as views and presented to the Access db as linked tables as well.
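A hedged sketch of that linking step from VBA (the DSN, server table and local names are assumptions; the Import/Link wizard does the same thing interactively):
Public Sub LinkCustomersTable()
    ' Hypothetical sketch: link a SQL Server table into Access over the ODBC DSN.
    ' "MyServerDSN", "dbo.Customers" and the local name "Customers" are assumed.
    DoCmd.TransferDatabase acLink, "ODBC Database", _
        "ODBC;DSN=MyServerDSN;DATABASE=MyDatabase;Trusted_Connection=Yes;", _
        acTable, "dbo.Customers", "Customers"
End Sub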
Linked Access tables work fine but I've only used them with ODBC and other databases (Firebird, MySQL, Sqlite3). Information on primary or foreign keys wasn't passing through. There were also problems with datatype interpretation: a date in MySQL is not the same thing as in Access VBA. I guess these problems aren't nearly as bad when using SQL Server.
Important Point: If you link the tables in Access to SQL Server, then EVERY table must have a Primary Key defined (Contractor? Access? Experience says that probably some tables don't have PKs). If a PK is not defined, then the Access forms will not be able to update and insert rows, rendering the tables effectively read-only.
Take a look at this Access to SQL Server migration tool. It might be one of the few, if not the ONLY, true peer-to-peer or server-to-server migration tools running as a pure Web Application. It uses mostly ASP 3.0, XML, the File System Object, the Data Dictionary Object, ADO, ADO Extensions (ADOX), the Dictionary Scripting Objects and a few other neat Microsoft techniques and technologies. If you have the Source Access Table on one server and the destination SQL Server on another server or even the same server and you want to run this as a Web Internet solution this is the product for you. This example discusses the VPASP Shopping Cart, but it will work for ANY version of Access and for ANY version of SQL Server from SQL 2000 to SQL 2008.
I am finishing up development for a generic Database Upgrade Conversion process involving the automated conversion of Access Table, View and Index Structures in a VPASP Shopping or any other Access System to their SQL Server 2005/2008 equivalents. It runs right from your server without the need for any outside assistance from external staff or consultants.
After creating a clone of your Access tables, indexes and views in SQL Server this data migration routine will selectively migrate all the data from your Access tables into your new SQL Server 2005/2008 tables without having to give out either your actual Access Database or the Table Contents or your passwords to anyone.
Here is the Reverse Engineering part of the process running against a system with almost 200 tables and almost 300 indexes and Views which is being done as a system acceptance test. Still a work in progress, but the core pieces are in place.
http://www.21stcenturyecommerce.com/SQLDDL/ViewDBTables.asp
I do the automated reverse engineering of the Access Table DDLs (Data Definition Language) and convert them into SQL equivalent DDL Statements, because table structures and even extra tables might be slightly different for every VPASP customer and for every version of VP-ASP out there.
I am finishing the actual data conversion routine which would migrate the data from Access to SQL Server after these new SQL Tables have been created including any views or indexes. It is written entirely in ASP, with VB Scripting, the File System Object (FSO), the Dictionary Object, XML, DHTML, JavaScript right now and runs pretty quickly as you will see against a SQL Server 2008 Database just for the sake of an example.
It takes perhaps 15-20 seconds to reverse engineer almost 500 different database objects. There might be a total of over 2,000 columns involved in this example for the 170 tables and 270 indexes involved.
I have even come up with a way for you to run both VPASP systems in parallel using 2 different database connection files on the same server just to be sure that orders entered on the Access System and the SQL Server system produce the same results before actual cutover to production.
John (a/k/a The SQL Dude)
sales#designersyles.biz
(This is a VP-ASP Demo Site)
Here is a technique I've heard one developer speak on. This is if you really want something like a Client-Server application.
Create .mdb/.mde frontend files distributed to each user (You'll see why).
For every table they need to perform CRUD on, have a local copy in the file in #1.
The forms stay linked to the local tables.
Write VBA code to handle the CRUD from the local tables to the SQL Server database (a rough sketch of this step appears at the end of this answer).
Reports can be based on temp tables populated from SQL Server (I don't think you can create temp tables in an mde file).
Once you decide how you want to do this with a single form, it is not too difficult to apply the same technique to the rest. The nice thing about working with the form on a local table is that you can keep a lot of the functionality of the existing application (which is why they used, and continue to use, Access, I hope). You just need to address getting data back and forth to the SQL Server.
You can continue to have linked tables, and then gradually phase them out with this technique as time and performance needs dictate.
Since each user has their own local file, they can work on their local copy of the data. Only the minimum required to do their task should ever be copied locally. Example: if they are updating a single record, the table would only have that record. When a user adds a new record, you would notice that the ID field for the record is Null, so an insert statement is needed.
I guess the local table acts like a dataset in .NET? I'm sure in some way this is an imperfect analogy.
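A minimal sketch of the write-back step described above, assuming a one-row local table CustomerLocal mirroring a linked SQL Server table Customer_SQL (both names invented): check whether the local row's ID is Null to decide between an INSERT and an UPDATE.
Public Sub SaveCustomerToServer()
    ' Hypothetical sketch of step 4: push the single locally edited row back to
    ' the linked SQL Server table. All table and field names are invented.
    Dim rstLocal As DAO.Recordset
    Set rstLocal = CurrentDb.OpenRecordset("CustomerLocal", dbOpenSnapshot)  ' assumes exactly one row

    If IsNull(rstLocal!CustomerID) Then
        ' New record: let SQL Server assign the IDENTITY key
        CurrentDb.Execute _
            "INSERT INTO Customer_SQL (CustomerName, City) " & _
            "SELECT CustomerName, City FROM CustomerLocal", _
            dbFailOnError + dbSeeChanges
    Else
        ' Existing record: send the edited values back (Jet join-update syntax,
        ' which works here because both tables are visible to Jet)
        CurrentDb.Execute _
            "UPDATE Customer_SQL INNER JOIN CustomerLocal " & _
            "ON Customer_SQL.CustomerID = CustomerLocal.CustomerID " & _
            "SET Customer_SQL.CustomerName = CustomerLocal.CustomerName, " & _
            "Customer_SQL.City = CustomerLocal.City", _
            dbFailOnError + dbSeeChanges
    End If
    rstLocal.Close
End Sub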

What are the limitations to SQL Server Compact? (Or - how does one choose a database to use on MS platforms?)

The application I want to build using MS Visual C# Express (I'm willing to upgrade to Standard if that becomes required) needs a database.
I was all psyched about the SQL Server Compact - because I don't want the folks who would be installing my application on their computers to have to install the whole of SQL Server or something like that. I want this to be as easy as possible for the end user to install.
So I was all psyched until it seems that there are limitations to what I can do with the columns in my tables. I created a new database, created a table and when I went to create columns it seems that there isn't a "text" datatype - just something called "ntext" that seems to be limited to 255 characters. "int" seems to be limited to 4 (I wanted 11). And there doesn't seem to be an "auto_increment" feature.
Are these the real limitations I would have to live with? (Or is it because I'm using "Express" and not "Standard"). If these are the real limitations, what are my other database options that meet my requirements? (easy installation for user being the biggie - I'm assuming that my end user is just an average user of computers and if it's complicated would get frustrated with my application)
-Adeena
PS: I also want my database data to be encrypted so that the end user can't access the database tables directly.
PPS. I did read: http://www.microsoft.com/Sqlserver/2005/en/us/compact.aspx and didn't see a discussion on these particular limitations
I'm not sure about encryption, but you'll probably find this link helpful:
http://msdn.microsoft.com/en-us/library/ms171955.aspx
As for the rest of it:
"Text" and "auto_increment" remind me of Access. SQL Server Compact is supposed to be upgrade compatible to the server editions of SQL Server, in that queries and tables used in your compact database should transfer to a full database without modification. With that in mind, you should first look at the SQL Server types and names rather than Access names: in this case namely varchar(max), bigint, and identity columns.
Unfortunately, you'll notice this fails with respect to varchar(max), because Compact Edition doesn't yet have the varchar(max) type. Hopefully they'll fix that soon. However, the ntext type you were looking at supports many more than 255 bytes: 2^30 in fact, which amounts to more than 500 million characters.
Finally, bigint uses 8 bytes for storage. You asked for 11. However, I think you may be confused here in thinking that the storage size indicates the number of decimal digits available. This is definitely NOT the case. 8 bytes of storage allows for 2^64 distinct values, which will accommodate many more than 11 digits. If you have that many items you probably want a server-class database anyway. If you really want to think in terms of digits, there is a numeric type provided as well.
A few, hopefully helpful comments:
1st - do not use SQLite unless you like having the entire database locked during writes (http://www.sqlite.org/faq.html#q6), and, perhaps more importantly in a .NET application, it is NOT thread safe - or, more to the point, it must be recompiled to support threads (http://www.sqlite.org/faq.html#q6).
As an alternate for my current project I looked at Scimore DB (they have an embedded version with ADO.Net provider: http://www.scimore.com/products/embedded.aspx) but I needed to use LINQ To SQL as an O/RM so I had to use Sql Server CE.
The auto increment (if you are referring to automatic key incrementing) is what it always has been - example table:
-- Table Tests
CREATE TABLE Tests (
    Id int IDENTITY(1,1) PRIMARY KEY NOT NULL,
    TestName nvarchar(100) NOT NULL,
    TimeStamp datetime NOT NULL
)
GO
As far as the text size I think that was answered.
Here is a link to information on encryption from microsoft technet: (http://technet.microsoft.com/en-us/library/ms171955.aspx)
Hope this helps a bit....
Had to chime in on two factors:
I use SQL Compact a lot and it's great for what it's designed for -- a single-user, embedded database with a single-file data store. It has all the SQL goodness and transactions. It handles parallelism well enough for me. Notice that few of the naysayers on this page use the product regularly. Don't use it on a server; that's not what it's for. Many of my customers don't even know the file is a "database" - that is just an implementation issue.
You want to encrypt the data from your users -- presumably so they can only view it from your program. This simply isn't going to happen. If your program can decrypt the data, then you have to store the key somewhere, and a sufficiently dedicated attacker will find it, period.
You may be able to hide the key well enough that the effort to recover it isn't worth the value of the information. Windows has some neat machine- and user-local encryption routines to help. But if your design has a strong requirement that a user never find data you have hidden on their computer (while your program can), you need to redesign -- that guarantee simply cannot be accomplished.
SQL CE is a puzzle to me. Did we really need yet another different SQL database platform? And it's the third in the last several years targeted at mobile platforms from MS ... I wouldn't have a lot of confidence that it will be the final one. It doesn't share much if any technology with SQL Server - it's a new one from scratch as far as I can tell.
I've tried it, and then been more successful with both SQLite and Codebase.
EDIT: Here is a list of the (many) differences.
ntext supports very large text data (see MSDN - this is for Compact 4.0, but the same applies to 3.5 for the data types you are mentioning).
int is a numeric data type, so the size of 4 means 4 bytes/32 bits of storage (–2,147,483,648 to 2,147,483,647). If you intend to store 11 bytes of data in a single column, use the varbinary type with a size of 11.
Automatically incrementing columns in the SQL Server world are done using the IDENTITY keyword. This causes the value of the column to be automatically determined by SQL Server when inserting data into a row, preventing collisions with any other rows.
You can also set a password or encrypt the database when creating it in SQL Compact to prevent users from directly accessing your application. See Securing Databases on MSDN.
All of the items you mention above are not really limitations so much as a matter of understanding how to use SQL Server.
Having said that, there are some limitations to SQL Compact.
No support for NVARCHAR(MAX) - NTEXT works just fine for this
No support for VIEWs or PROCEDUREs - this is what I see as the primary limitation
I've used the various SQL Server Compact editions on a few occasions, but only ever as data capture repositories on mobile platforms - where it works well for syncing with a server database, and for that sort of scenario it is undoubtedly the optimal choice.
However, if you need something to do more than that and act as the primary database for your application, then I'd suggest SQLite is probably the better option. It's completely solid, widely supported and found in all sorts of places (used on the iPhone, for example), yet surprisingly capable (the virtual reality simulator OpenSim uses it as its default database), and there are lots of other users (including Microsoft).
I must also chime in here with VistaDB as an alternative to SQL CE.
VistaDB does support encryption (Blowfish), it also supports TEXT as well as NTEXT (including FTS indexes on them).
And yes, the post above is correct in that you have to look at the SQL Server types to really match them up; VistaDB also uses the SQL Server types (we actually support more than SQL CE does; we're only missing XML).
To see other comparisons between VistaDB and SQL CE visit the comparison page. Also see the SO thread on Advantages of VistaDB for more information.
(Full disclosure - I am the owner of VistaDB so I may be biased)
According to this post (http://www.nelsonpires.com/web-development/microsoft-webmatrix-the-dawn-of-a-new-era/), because it uses a database file, only one process can access it for each read/write, so it needs exclusive access to the file; it is also limited to 256 connections, and the whole file will most likely have to be loaded into memory. So SQL Server Compact might not be good for your site when it grows.
There are constraints... Joel seems to have addressed the details. SQL CE is really geared for mobile development. Most of the "embedded" database solutions have similar constraints. Check out
SQLite
No TEXT field character limit
Auto increment only on INTEGER PRIMARY KEY column
Some third party encryption support
Esent
(unmanaged code isn't my forte, and I can't decipher the unmanaged docs)
