Compression not available on SQL Server Standard? Options? - sql-server

So, as they say, every day is a school day. Today I learned that my workplace runs SQL Server Standard edition, where I would have assumed Enterprise was in place. Although, in reality, I shouldn't be surprised!
For some context, we have a very large database that houses our warehouse data. As the database has grown, it has started causing space issues on the server, along with some application performance problems. So, looking at it from my perspective, I suggested we archive and purge the PROD database so that it houses only 18 months of data.
I wrote my scripts and tested them, and all was fine. I then went to compress the tables I had deleted data from, only to find error messages saying that compression is not available in SQL Server Standard and requires Enterprise edition.
Wondering what my next steps are here? My assumption is that even though I am deleting a lot of data, we won't actually benefit in terms of performance and space reclamation until the tables get compressed.
Shrinking is something I guess I've always shied away from; many articles and posts here advise against using it.
Wondering, what sort of options do I have here?
Is my assumption correct that, without compression, we won't regain space from the trimmed database?
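(For what it's worth, here is a hedged sketch of how one might check and reclaim that space without Enterprise-only compression. The table and file names are placeholders, not from the original post:)

    -- Reserved vs. actually-used space for a trimmed table
    EXEC sp_spaceused N'dbo.WarehouseFact';   -- hypothetical table name

    -- A rebuild compacts pages left half-empty by the deletes;
    -- a plain (offline) REBUILD is available on Standard edition
    ALTER INDEX ALL ON dbo.WarehouseFact REBUILD;

    -- Only if space must be handed back to the OS: shrink gradually,
    -- then rebuild again, since shrinking fragments indexes
    -- DBCC SHRINKFILE (N'WarehouseData', 100000);  -- target size in MB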

Marking this as resolved, as I've opened the question in the DBA-specific portion of the site.

Related

Does re-indexing of the database run automatically with the CRM app?

It's my first time using CRM as an app, though I have been using SQL Server for quite a while now. Our company is experiencing issues with emails, and we are seeing warnings in Event Viewer:
Query execution time of 17.3 seconds exceeded the threshold of 10 seconds.
for almost all select statements.
My main concern is that I suspect the CRM reindexing job is not running properly. I have also checked the maintenance plans in SQL Server, and no reindexing configuration was set up.
I hope someone can help me. Thanks!
In Dynamics 2016 On Premises you are free to create and maintain your own indexes. What is needed really depends on how your company is using the system. Sometimes recalculating statistics does the trick (as far as I remember, this is part of the built-in maintenance jobs); sometimes you may even want to partition the database.
E-mail storage in MS Dynamics can quickly grow out of hand, because attachments (e.g. those colorful company footers) tend to take up a lot of valuable database space. Decide which e-mails really need to be tracked, whether attachments and images can be stripped from them, or whether they can be stored in a separate filegroup. (To mention a few options.)
Poor SQL Server management is actually often the root cause of performance issues on MS Dynamics. Your skills will be of much value!
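If you want to verify whether any maintenance is actually happening, here is a quick sketch of the usual checks (the table name is a placeholder, not a real CRM object):

    -- When were statistics on a suspect table last updated?
    SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM sys.stats AS s
    WHERE s.object_id = OBJECT_ID(N'dbo.EmailBase');  -- hypothetical table

    -- How fragmented are the indexes across the database?
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30;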

How to explain risks of Access 2007 development vs. SQL Server

I have a client who wants to develop an application using Access 2007. For the stated short term purposes, Access 2007 fits their specification:
approx 30K master records
6 or fewer users
department file server
The issue is that the client is very technically naive and isn't at all aware of the trouble they might get into if the scope increases. The application will be storing master data that will be uploaded into an enterprise system and I fear that six months from now I'll be hearing any of the following issues:
we need to keep all of the historical data (suddenly we have 3M rows)
we need fine grained and airtight user level security
we keep getting corrupt data records
our database wasn't backed up for three months (because a user kept it open)
I've done a few small Access databases, but I'm a SQL Server dev by trade and I know how to use it to solve most any problem. I don't know if my client should be worried about their choice of technology, and if they should be, I'm not 100% sure how best to communicate the risks to them.
I fear that six months from now I'll be hearing any of the following issues:
we need to keep all of the historical data (suddenly we have 3M rows)
Three million rows isn't necessarily a deal-breaker for a Jet/ACE data store. It depends on the amount of data in each of those rows.
we need fine grained and airtight user level security
This is a compelling reason to move data storage to a client-server db.
we keep getting corrupt data records
That should almost never happen with a proper Access implementation, contrary to claims by Access bigots. It will happen if you're running across an unreliable network. But, if that's your client's situation, you should either fix the network problems or ditch Access for data storage.
our database wasn't backed up for three months (because a user kept it open)
You can build on Arvin Meyer's KickEmOff approach. But with <= 6 users currently, it might be easier to deal with the situation without code for now. Just ask them to close out long enough for the backup. You could have your automated backup routine create a notice when its attempt fails, so this shouldn't have to be a constant thing.
In any case, I suggest you design the current application so that an eventual migration to SQL Server will be less troublesome. Avoid Access-specific features: hyperlink data type; lookup fields; multi-value fields; attachment fields; and so forth. Since you're experienced with SQL Server, it should be fairly easy to create a test SQL Server database and link a copy of your Access front-end to it. Test periodically as you develop the Access front-end. Then you look like a hero when the client recognizes a need to move the data storage to SQL Server.
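To illustrate the "design for migration" point, here's a sketch of what a migration-friendly test table might look like on the SQL Server side (all names invented for the example):

    -- Only types that translate cleanly between Jet/ACE and SQL Server
    CREATE TABLE dbo.MasterRecord (
        MasterID       int IDENTITY(1,1) PRIMARY KEY,
        RecordName     nvarchar(100) NOT NULL,
        WebLink        nvarchar(255) NULL,  -- instead of the Access hyperlink type
        AttachmentPath nvarchar(260) NULL,  -- a file path, not an attachment field
        CreatedOn      datetime NOT NULL DEFAULT GETDATE()
        -- no multi-value fields: model those as a separate junction table
    );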
I'm in a mixed SQL/Access dev shop and understand your concerns, but the sheer usability of Access often wins out for users. Where we have mission-critical data and need to use Access, we simply use linked tables - best of both worlds: SQL Server handles security, backups, etc., and Access provides the front end.
To me, the obvious answer is to develop an Access front end to an Access back end for the initial implementation, but doing the development with upsizing the back end to SQL Server in mind.
That means just applying common sense to what you do, as @HansUp suggests (i.e., not using Access-specific functionality), and designing your data retrieval so that it will work well with a server back end.
If, on the other hand, either the increased amount of data or the security issues are actually not just remote possibilities but likely to become issues during the lifetime of the app, I'd go with a SQL Server back end from the beginning. But your description of the situation really doesn't sound like that's the case at all.
Certainly the corruption and backup concerns are completely misplaced. Proper maintenance and backup has to be in place, and the operating environment has to be stable, but all of that applies to any database engine, not just to Jet/ACE.
Explain to your client that you will have to charge much more money to create, implement, maintain, repair and later upsize the application. Explain that they will not save money in the long run and that they will be better off if they allow you to prepare properly now. That being said, I agree with @HansUp's suggestions. You can give the customer what they want and still prepare for the likely eventualities. Think of it as job security.
There are price and GUI advantages to using Access over SQL Server that are really attractive to non-technical people. I think, given your scenario, maybe the "customer" is right - aren't they always!
However, your 4 "following issues" really answer your own question.
If your user is technically naive then there is not much point in using technical language. If at all possible, when I speak to users, the language and terms I use are the ones my users understand. Also, compliment your users when possible; it makes them feel good and makes you look good in their eyes. Here are some suggested ideas.
Using Access 2007 is an excellent idea: easy to develop with and to change to meet your needs. However, there are a number of very strong technical reasons for using another free tool, namely SQL Server Express, to store the data.
Why use SQL Server Express?
It's free!
Security of the data will be a very high priority (even if the client has not mentioned this, use it as a reason). Point out how easy it would be to steal all the data from Access compared to SQL Server. See this book for excellent detail regarding Access security. User-level security is much simpler and easier with SQL Server, will cost less money to implement, and is more secure.
Backing up the data. In order to back up the Access database, no one can be using it or even connected to it. With SQL Server you can back it up at any time - less downtime, or in other words greater productivity, using this other FREE tool. (See the backup sketch at the end of this answer.)
Data corruption. One issue with an Access database is corruption. What does this mean? It is possible to lose up to a day's worth of work; with SQL Server this issue is very much less likely to occur. There are even situations where it is not possible to recover an Access database at all. This loss of productivity can be minimised by using SQL Server.
When this tool gains greater recognition and other departments wish to use it, as no doubt it will, moving to a larger enterprise database system will be much easier and less costly if you use SQL Server Express as the data store.
The above are just suggestions, based on the assumption your user wishes to spend as little money as possible, and on the limitations/resources you described in your posting.
I also appreciate that not everyone will agree with the suggestions above. They are not meant as detailed technical points, but rather as ways of persuading a technically naive client to consider using SQL Server Express as the back-end db for an Access-based departmental application.
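To make the backup point above concrete, here is a minimal sketch of an online SQL Server backup (the database name and path are invented); Access, by contrast, needs every user out of the file first:

    -- Runs while users stay connected; no exclusive access required
    BACKUP DATABASE DepartmentApp                 -- hypothetical database
    TO DISK = N'D:\Backups\DepartmentApp.bak'     -- hypothetical path
    WITH INIT, CHECKSUM;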

What database to use for big data storage and manipulation?

I have to decide which database server to use for my next project, but the simple decision to use MySQL, as in almost all the projects I've done, is harder now, because I expect a very large number of records.
The database will store a user list, some other tables that aren't relevant here, and the last one: user-collected data. Let's say I have 6000 users responding to a quiz about each other. Simple math shows that if each one completes the quiz about everyone else (and in my project it is 99% certain that will happen), I'll end up with 35.99 million records (users exclude themselves, so the calculation is 6000*5999). Unfortunately 6000 may be a small number; the real one grows day by day.
What to choose? MySQL and maybe if things go well and the project grows to expand it in a cluster? PostgreSQL, MSSQL? Oracle?
I've read about all of them; each one has its pros and cons, but I still don't know what to choose. The advantage of MySQL and PostgreSQL is, of course, the starting price of $0, which is pretty nice for a self-funded startup.
Any opinions, pieces of advice? If you encountered this situation in your experience as developers, I'd love to hear from you.
These days, free isn't something that differentiates between databases any more. Both Oracle and SQL Server have free versions, but the limitation is resources - a 4 GB database size cap, plus RAM and single-CPU utilization limits. Millions of records are not a concern - what matters is which datatypes you're using.
I saw the OP's comment about not liking MS software - that's your prerogative, but the free versions of both Oracle and SQL Server benefit from a seamless transition to the upscale versions of the respective database.
Personally, my choice would be either Oracle or SQL Server because of, IMHO, real feature considerations like hierarchical query support, subquery factoring/CTEs, packages (long before I get concerned with functions/procedures), full-text searching, XML support, etc.
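As an example of the hierarchical-query point, here's a sketch of a recursive CTE in T-SQL (table and columns invented for the example):

    -- Walk an org chart of arbitrary depth in a single statement
    WITH OrgChart AS (
        SELECT EmployeeID, ManagerID, EmployeeName, 0 AS Depth
        FROM dbo.Employees              -- hypothetical table
        WHERE ManagerID IS NULL         -- anchor: start at the top
        UNION ALL
        SELECT e.EmployeeID, e.ManagerID, e.EmployeeName, oc.Depth + 1
        FROM dbo.Employees AS e
        JOIN OrgChart AS oc ON e.ManagerID = oc.EmployeeID
    )
    SELECT * FROM OrgChart;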
MySQL will handle 35 million records no problem. Worry about scalability when you get there. You can easily add RAID hard disks backing your database tables, and if you really start getting big you can get a Compellent SAN that will scream... Don't worry about the DB engine as much as the underlying hardware. MySQL rocks for us with millions of records.
I've had no problems handling tables as large as 36,000,000 rows on MySQL and Oracle.
Just be sure that you index the proper columns, run EXPLAINs for your queries, and maintain proper design principles.
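In MySQL terms that advice boils down to something like this (table and column names are placeholders):

    -- Index the column your hot query filters on...
    CREATE INDEX idx_responses_responder ON quiz_responses (responder_id);

    -- ...then confirm the optimizer actually uses it
    EXPLAIN SELECT * FROM quiz_responses WHERE responder_id = 42;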
Most of the truly large scale web properties use a distributed key-value store. That said, 35 million is large, but not that large. With most modern databases, your main two scaling worries should be throughput and what happens when no single box can contain your entire database anymore. And both of these problems can be solved to some degree for any database you choose to use. (Caching, replication, sharding, etc.)
Use MySQL until you can't anymore. At that point, you ought to be rolling in dough anyways and you now have a very desirable problem.
Use MySQL as it's free and you have experience with it.
Besides, in my opinion it matters more how you design the tables than which database you use.
35 million records can be easily handled by MS SQL Server (assuming proper database design, indices, etc.). You can start with the free SQL Server Express edition and later, if you need, you can upgrade to the full version which supports clustering, etc.
SQL Server Express does have some limitations - single CPU, 1 GB memory, max 4 GB database size and a few other things. I'm not sure how quickly these limitations will become a problem but you can always move to the full version when you run into them.
MySQL(i) & PostgreSQL
$0 cost
large community
many tutorials
well documented
MSSQL
You can get "money" from MS if you promote that you are using MSSQL (secret information from some companies I worked for)
MS tools work very well
Complete tool set, from the C# IDE and the .NET libraries to Windows Server 2003
Oracle
Professional and commercial provider
Used by many large companies (I have also heard that Blizzard uses Oracle for World of Warcraft)
Expensive
The final decision depends on the specific requirements of your project.
Make yourself a quick list of the things that ARE IMPORTANT for your project (e.g. quickly performed queries) and look up which database's pros best match your requirements.
Everything is about design. SQL databases are like cars: you just have to know which component goes here and which goes there.
Make a clear design and you won't struggle with any of them.
Maybe you can test Firebird.
There's a blog post about a big Firebird database here.
The MySQL licence is here (it's not always free).
PostgreSQL and Firebird are free.
First of all, don't think about performance. Premature optimization being the root of all evil and all that. You can always throw more hardware and/or tuning at it later.
All of the mentioned should perform nicely if tuned/maintained correctly. I'd focus on manageability and familiarity. IMHO open source databases excel at manageability (perhaps not the best GUIs, but the CLI has been my home for a long, long time).
And if the database becomes the bottleneck, why limit yourself to those choices? How about a key-value distributed database? Or perhaps serialize data directly to disk? Storing data outside of a RDBMS, while often frowned upon, might be the correct path. Or simply use the common route of denormalization.
Always remember not to optimize prematurely.
As far as opinions go (since you specifically asked for it) I favor open source databases, specifically PostgreSQL. It's rock solid, fast and very well-featured. And even with (relatively) large datasets it has performed superbly on mediocre hardware (some tuning involved, of course, but you can't skip that step no matter which db you end up choosing).

How would you migrate hundreds of MS Access databases to a central service?

We have literally hundreds of Access databases floating around the network - some with light usage, some with quite heavy usage, and some with no usage whatsoever. What we would like to do is centralise these databases onto a managed database server and retain as much as possible of the reports and forms within them.
The benefits of doing this would be to have some sort of usage tracking, and also the ability to pay more attention to some of the important decentralised data that is stored in these apps.
There are no real constraints on the RDBMS (Oracle, MS SQL Server) or the stack it would run on (LAMP, ASP.NET, Java), and there obviously won't be a silver bullet for this. We would like something that can remove the initial grunt work in an automated fashion.
We upsize users to SQL Server (either using the Upsizing Wizard or by hand). It's usually pretty straightforward: replace all the Access tables with linked tables to SQL Server and keep all the forms/reports/macros in Access. The investment in Access isn't lost and the users can keep going, business as usual. You get the reliability of SQL Server and centralized backups. Keep in mind - we've done this for a few large Access databases, not hundreds. I'd do a pilot of a few dozen and see how it works out.
UPDATE:
I just found this, the SQL Server Migration Assistant; it might be worth a look:
http://www.microsoft.com/sql/solutions/migration/default.mspx
Update: Yes, some refactoring will be necessary for poorly designed databases. As for how to handle Access sprawl? I've run into this at companies with lots of technical users (engineers, especially, are the worst for this... and Excel sprawl). We did an audit and (after backing up) deleted any databases that hadn't been touched in over a year. "Owners" were assigned based on the location and/or the data in the database. If the database was in "S:\quality\test_dept" then the quality manager and head test engineer had to take ownership of it or we deleted it (again, after backing it up).
Upsizing an Access application is no magic bullet. It may be that some things will be faster, but some types of operations will be real dogs. That means that an upsized app has to be tested thoroughly and performance bottlenecks addressed, usually by moving the data retrieval logic server-side (views, stored procedures, passthrough queries).
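For example, "moving the logic server-side" might look like the following sketch (object names invented), which an Access front end would then reach via a linked view or a passthrough query:

    -- A view the Access front end can link to instead of joining locally
    CREATE VIEW dbo.vw_OpenOrders AS
    SELECT o.OrderID, o.OrderDate, c.CustomerName
    FROM dbo.Orders AS o                       -- hypothetical tables
    JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
    WHERE o.Status = 'OPEN';
    GO

    -- Or a parameterized procedure behind a passthrough query
    CREATE PROCEDURE dbo.usp_OrdersForCustomer @CustomerID int
    AS
        SELECT OrderID, OrderDate, Total
        FROM dbo.Orders
        WHERE CustomerID = @CustomerID;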
Upsizing is not really an answer to the question, though.
I don't think there is any automated answer to the problem. Indeed, I'd say this is a people problem and not a programming problem at all. Somebody has to survey the network and determine ownership of all the Access databases and then interview the users to find out what's in use and what's not. Then each app should be evaluated as to whether or not it should be folded into an Enterprise-wide data store/app, or whether its original implementation as a small app for a few users was the better approach.
That's not the answer you want to hear, but it's the right answer precisely because it's a people/management problem, not a programming task.
Oracle has a migration workbench to port MS Access systems to Oracle Application Express, which would be worth investigating.
http://apex.oracle.com
So? Dedicate a server to your Access databases.
Now you have the benefit of some sort of usage tracking, and also the ability to pay more attention to some of the important decentralised data that is stored in these apps.
This is what you were going to do anyway, only you wanted to use a different database engine instead of NTFS.
And now you have to force the users onto your server.
Well, you can encourage them by telling them that you aren't going to overwrite their data with old backups anymore, because now you will own the data, and you won't do that anymore.
Also, you can tell them that their applications will run faster now, because you are going to exclude the folder from on-access virus scanning (you don't do that to your other databases, which is why they are full of sql-injection malware, but these databases won't be exposed to the internet), and planning to turn packet signing off (you won't need that on a dedicated server: it's only for people who put their file-share on their domain-server).
Easy upgrade path, improved service to users, greater centralization and control for IT. Everyone's a winner.
Further to David Fenton's comments
Your administrative rule will be something like this:
If the data in the database is just being used by one user, for their own work (alone), then they can keep it in their own network share.
If the data in the database is used by more than one person (even if only two), then that database must go on a central server under IT's management (backups, schema changes, interfaces, etc.). This is because someone experienced needs to coordinate the whole show, or we risk wasting the time/resources of the next guy down the line.

When is it time to change database backends?

Is there a general rule of thumb to follow when storing web application data to know what database backend should be used? Is it the number of hits per day, the number of rows of data, or other metrics that I should consider when choosing?
My initial idea is that the order for this would look something like the following (but not necessarily, which is why I'm asking the question).
Flat Files
BDB
SQLite
MySQL
PostgreSQL
SQL Server
Oracle
It's not quite that easy. The only general rule of thumb is that you should look for another solution when the current one can't keep up anymore. That could include using different software (not necessarily in any globally fixed order), hardware or architecture.
You will probably get a lot more benefit out of caching data using something like memcached than switching to another random storage backend.
If you think you are ever going to need one of the heavyweights (SQL Server, Oracle), you should start with one of those at the beginning. Data migrations are extremely difficult. In the long run it will cost you less to just start at the top and stay there.
I think you're being overly specific in your rankings. You can pretty much start with flat files and the like for very small data sets, go up to something like DBM for slightly bigger ones that don't require SQL-like syntax, and go to some kind of SQL database after that.
But who wants to do all that rewriting? If the application will benefit from access to joins, stored procedures, triggers, foreign key validation, and the like--just use a SQL database regardless of the dataset size.
Which one you choose should depend more on the client's existing installations and what DBA skills are available than on the amount of data you're holding.
In other words, the size of your database is far from the only consideration, and maybe not the most important one.
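Foreign key validation alone is a good illustration of what a SQL database gives you over flat files; here's a sketch in generic SQL (the schema is invented):

    CREATE TABLE customers (
        customer_id int PRIMARY KEY,
        name        varchar(100) NOT NULL
    );

    CREATE TABLE orders (
        order_id    int PRIMARY KEY,
        customer_id int NOT NULL,
        -- the engine rejects orders pointing at nonexistent customers;
        -- with flat files you would re-implement this check by hand
        FOREIGN KEY (customer_id) REFERENCES customers (customer_id)
    );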
There is no blanket answer to this, but ALMOST always, using flat files is not a good idea. You have to parse through them (I suppose) and they do not scale well. Starting with a proper database, like Oracle or SQL Server (or MySQL or Postgres if you are looking for free options), is a good idea. For very little overhead, you will save yourself a lot of effort and headache later on. They also allow you to structure your data in a non-stupid fashion, leaving you free to think about WHAT you will do with the data rather than HOW you will get it in and out.
It really depends on your data, and how you intend to use it. At one of my previous positions, we used Postgres due to its native geolocation and timezone extensions, because they allowed us to manage our data using polygonal datatypes. For us, we needed to do that, and we also wanted to use stored procedures, views and the like.
Now, another place I worked at used MySQL simply because the data was normalized, standard row by row data.
SQL Server's free editions long had small database size limits (4 GB in SQL Server Express, for example), but despite that limitation it remains a very stable platform for small to medium applications in which the old data is purged.
Now, from working with Oracle and SQL Server 05/08, all I can tell you is that if you want the cream of the crop for stability, scalability and flexibility, then these two are your best bet. For enterprise applications, I strongly recommend them (merely because that's what we use where I work now).
Other things to consider:
Language integration (ASP.NET session storage, role management, etc.)
Query types (Select, Update, Delete) (although this is more of a schema design issue than a DBMS issue)
Data storage requirements
Your application's usage pattern for the database is the most critical factor - mainly, which queries are used most often (SELECT, INSERT or UPDATE)?
Say you use SQLite: it is geared toward smaller applications, but for a web application you might want a bigger one like MySQL or SQL Server.
The way you write scripts and your web application platform also matter. If you're developing on a Microsoft platform, then SQL Server is a better alternative.
Typically, I go with what is commonly accepted by whichever framework I am using. So, if I'm doing .NET => SQL Server, Python (via Django or Pylons) => MySQL or SQLite.
I almost never use flat files though.
There is more to choosing an RDBMS solution than just "back-end horsepower". The ability to have commitment control, for example, so you can roll back a failed transaction, is one reason.
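Commitment control in practice looks something like this sketch (T-SQL, SQL Server 2012+ syntax; the table is invented):

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Two writes that must succeed or fail together
        UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
        UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- Any failure undoes both writes; flat files offer no such guarantee
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;
    END CATCH;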
Unless you are writing a mega-transaction-rate application, most database engines will be adequate - so it becomes a question of how much you want to pay for the software, whether it runs on the hardware and operating system environment you want, and what expertise you have in managing that software.
That progression sounds painful. If you're going to include MS products (especially the for-pay SQL Server) in there anywhere, you may as well use the whole stack, since you only have to pay for the last of these:
SQL Server Compact -> SQL Server Express -> SQL Server Enterprise (clustered).
If you target your app at SQL Server Compact initially, all your SQL code is guaranteed to scale up to the next version without modification. If you get bigger than SQL Server Enterprise, then congratulations. That's what they call a good problem to have.
Also: go back and check the SO podcasts. I believe they talked about this briefly.
This question depends on your situation really.
If you have control over the server you're deploying to and you can install whatever services you need, then the choice between installing a MySQL or MSSQL Express server and coding against an existing database framework VERSUS coding against a flat file structure is not even worth considering.
What about Firebird? Where would that fit into that list?
And let's not forget the requirements that the "customer" of your solution must also have in place. If you're writing a commercial application for a small company, then Oracle might not be a good choice... but if you're writing a customized solution for a large enterprise which must share data among multiple campuses, and which has a good-sized IT department, then the decision of Oracle vs. SQL Server would come down to what the customer most likely already has deployed.
Data migration nowadays isn't that bad, since we have those great tools from Embarcadero, so I would instead let the customer's needs drive the decision.
If you have the option, SQL Server is a good choice from the word go, predominantly because you have access to solid procedures and functions and the database backup facilities are totally reliable. Wrapping up as much of your logic as you can inside the database itself (rather than in whatever language you are using) helps security and performance - indeed, there's a good argument for always using procedures for insert/update logic, as these go a long way toward protecting you from injection attacks.
If I have the choice, the only time I'd consider MySQL in preference is for a large, fairly simple database predominantly used for read access. This isn't to decry MySQL, which has improved markedly of late and which I happily use if I don't have the choice, but for more complex systems with update/insert activity MSSQL is generally the superior option.
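To illustrate the insert-via-procedure point above, a minimal sketch (names invented): the parameters are typed data, never spliced into the SQL text, which is what blunts injection attempts:

    CREATE PROCEDURE dbo.usp_InsertCustomer
        @Name  nvarchar(100),
        @Email nvarchar(255)
    AS
    BEGIN
        -- A value like '; DROP TABLE Customers; --' arrives here as
        -- an ordinary string and is stored harmlessly
        INSERT INTO dbo.Customers (Name, Email)
        VALUES (@Name, @Email);
    END;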
I think your list is subjective, but I will play your game.
Flat Files
BDB
SQLite
MySQL
PostgreSQL
SQL Server
Oracle
Teradata
