Actual file size exceeds max size - SQL Server

I found out that one of our databases has a file that is larger than its defined maximum size, see picture below.
I've never seen this before, and I cannot find similar cases on the internet.
The database seems to be working fine, but I think this needs fixing.
We're running SQL Server 2012 (SP3-CU10) (yes, I know... they're working on it)
Any ideas?
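For anyone wanting the raw numbers behind that screen: both the current size and the cap live in sys.database_files, counted in 8 KB pages (max_size = -1 means unlimited growth). A minimal sketch in Python using pyodbc, with placeholder connection details:

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()
    # size and max_size are both reported in 8 KB pages; -1 means no limit.
    cursor.execute("SELECT name, type_desc, size, max_size FROM sys.database_files;")
    for name, type_desc, size, max_size in cursor.fetchall():
        size_mb = size * 8 / 1024.0
        if max_size == -1:
            print(f"{name} ({type_desc}): {size_mb:.0f} MB, unrestricted growth")
        else:
            limit_mb = max_size * 8 / 1024.0
            flag = "  <-- exceeds max_size" if size > max_size else ""
            print(f"{name} ({type_desc}): {size_mb:.0f} / {limit_mb:.0f} MB{flag}")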

Related

Backing up old clusters AFTER upgrading PostgreSQL

So, I was careless and upgraded Ubuntu from 18 to 20 -- thus postgresql from 10 to 12 -- WITHOUT making backups of my postgresql-10 cluster. Now that I'm looking into upgrading the cluster to work with 12, I'm realizing that was a mistake. Is there a way to back them up before attempting to upgrade them, now that postgres itself is already upgraded?
I could just copy the whole data folder and zip it up somewhere, but (a) that'd be a lot of disk space, and (b) I definitely don't yet understand postgres well enough to restore from those files.
(The last annoying thing here, which maybe deserves its own question, is that my pg10 data directory is on an external drive, which I'd like to keep using. Even if I can solve my backup problem, I'm not sure what the "easiest" way to do this is...)
EDIT: Actually I think my problem is a little different than I thought, and the postgres backup tools might still work for me. I will report back!
There is no supported way to downgrade PostgreSQL.
The best you can do is to take a plain-format pg_dump and keep editing the file until you succeed in loading it into v10. It helps to split the dump into the pre-data, data and post-data sections.
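For example, the sections can be dumped separately from the start; a minimal sketch driving pg_dump from Python (the database and file names are placeholders):

    import subprocess

    # Dump each section to its own plain-format file so each part can be
    # edited and loaded into v10 independently.
    for section in ("pre-data", "data", "post-data"):
        subprocess.run(
            ["pg_dump", "--format=plain", f"--section={section}",
             "--file", f"mydb_{section}.sql", "mydb"],
            check=True,
        )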
It turns out I had two versions of Postgres installed simultaneously, so I was able to back up with a simple 'pg_dumpall' before upgrading the clusters. This manpage and this blogpost were both helpful in sorting things out.
Because I had a lot of onboard storage free, I let pg_upgradecluster populate the default data folder, then copied it all over to my external drive and edited the conf file to point back to that. (And then, yes, I made a backup of the upgraded cluster.)
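In case it helps anyone in the same spot: on Debian/Ubuntu the old and new clusters run side by side on different ports (pg_lsclusters shows which), so the pre-upgrade backup can be scripted. A minimal sketch, assuming the old v10 cluster listens on port 5433:

    import subprocess

    # Full-cluster logical backup of the old (v10) cluster before touching it.
    # Port 5433 is an assumption -- check pg_lsclusters for the real value.
    with open("pre_upgrade_backup.sql", "w") as f:
        subprocess.run(["pg_dumpall", "--port", "5433"], stdout=f, check=True)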

File header changed due to different DB server?

I have a database (MS SQL 2019) where the customer saves files to certain records (item images). Unfortunately, the file names haven't been saved with the data, so the file extension is unknown. To recover those files when needed, we wrote a detector that determines the extension based on the header.
Now the strange thing is: we have a backup of the production server on our dev environment, and the headers for about 85% of the files return different bytes than the ones on the production server. But the kicker is: if I export the data into a file anyway and give both files the same extension (jpg), they both seem to be fine - no corrupted data.
We built the detector based on this list: https://en.wikipedia.org/wiki/List_of_file_signatures
The header on the production server is not on that list (it's A0 B9 A0 22) and I'm really curious what is going on. I could just add the header, but I want to know how this can even happen.
Thanks for your input.
EDIT: apparently, it wasn't the data that changed, but the interpretation layer, based on the server configuration. I still haven't found a proper explanation for the phenomenon. But I tested the records with Management Studio now, and the DB records HAVE the same value. Sorry to bother you.
I should've spent more time analyzing the issue. The database was restored unaltered - it's the interpretation layer that does something odd due to the different environments.
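For reference, a detector along the lines described above can be sketched in a few lines; the table, column, and DSN names here are hypothetical, and the signature table is a small subset of the Wikipedia list:

    import pyodbc

    # First bytes -> extension, per the file-signature list.
    SIGNATURES = {
        bytes.fromhex("FFD8FF"): "jpg",
        bytes.fromhex("89504E470D0A1A0A"): "png",
        bytes.fromhex("47494638"): "gif",
        bytes.fromhex("25504446"): "pdf",
    }

    def detect_extension(blob: bytes) -> str:
        for magic, ext in SIGNATURES.items():
            if blob.startswith(magic):
                return ext
        return "unknown"

    conn = pyodbc.connect("DSN=mydb")  # placeholder connection
    cursor = conn.cursor()
    cursor.execute("SELECT ItemId, ImageData FROM ItemImages;")
    for item_id, data in cursor.fetchall():
        # Printing the raw header makes it easy to diff the bytes coming
        # back from the production and dev servers.
        print(item_id, data[:8].hex().upper(), detect_extension(data))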

How to test changes to a cube?

There is a process in our company, built by a long-departed dev, that pulls data from a cube stored in a MS SQL Analysis Server (which I typically access via Management Studio). The overall process almost never fails and is refreshed several times a day. However, there appear to be some bugs in the calculations, which have been handed to me to investigate and fix.
Unfortunately, I knew nothing about cubes when this was handed to me, was not part of the original development process, and generic web tutorials don't seem to quite apply to whatever I'm looking at. On the plus side, trial and error has taught me enough that I can ask a few questions.
The bug is definitely in the calculations. But I obviously don't want to test in Production and I also don't want to make changes without a proper backup (that I know how to revert).
Is there a way to export the whole Analysis db to a .SLN file and open it in VS?
Should I instead use Script Cube as > CREATE To, change the cube name, and execute the script to make a copy?
If I'm later asked to add new dimensions or need to edit the data source, what's the best way to do this?
Any other tips?
You can create a Visual Studio project based on an existing cube. This will get everything in a project so you can investigate the calculations, make the necessary changes, deploy to a test environment, and check everything out before deploying to production. Don't forget to check the project into source control ;-)

Getting data from mdb database file in my Windows program

I have for some time helped a customer to export mdb table data to csv files (and then to further process these csv files). I have used Ubuntu, so mdbtools (mdb viewer) has been available to me. Now the customer wants me to automate the work I do in the form of a Windows program. I have run into two problems:
After some hours, I still haven't found a free tool on Windows that can export my table data in a way that I can incorporate into a program/script. Jackcess (jackcess.sourceforge.net) looks promising, but when I run the downloaded jar, a totally unrelated Nokia Suite program pops up...
I have managed to open two of the tables in a Python program by using the pyodbc module, but all the other tables fail to open because of "no read permissions". Until now I thought that there were no access restrictions on the database, because mdb viewer on Ubuntu opens all tables without any fuss. There is no other file available to me, just the mdb file. One possibility might be that this is not a permissions problem at all, but a problem with special characters in column names. All the tables that I cannot open have at least one column name with a national character, whereas the two tables I can open do not. I tried to use square brackets in the SQL SELECT called from Python, like so:
SQL = 'SELECT [colname] FROM SomeTable;'
but it makes no difference. I cannot fetch data from the columns that do not contain national characters either (except from the two tables that do work).
If it indeed is a permission problem, any solution must also be possible for my program to perform, there must not be any manual steps.
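For what it's worth, the table and column names don't have to be typed by hand: pyodbc exposes the ODBC catalog functions, so the schema can be enumerated and every column bracket-quoted automatically. A minimal sketch (the file path is a placeholder, and whether this sidesteps the "no read permissions" error is untested):

    import pyodbc

    conn = pyodbc.connect(
        r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
        r"DBQ=C:\data\customer.mdb;"
    )
    cursor = conn.cursor()

    # Walk the user tables and build a SELECT with every column bracketed,
    # so national characters in column names never appear unquoted.
    for table in cursor.tables(tableType="TABLE"):
        cols = [c.column_name for c in cursor.columns(table=table.table_name)]
        select = "SELECT {} FROM [{}];".format(
            ", ".join(f"[{c}]" for c in cols), table.table_name
        )
        print(select)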
Edit: The developer of the program that produces the mdb files has confirmed that there are no restrictions on any tables. So, the error message "no read permissions" is misleading. I will instead focus on getting around what I presume is a problem with national characters in column names. I will start with the JDBC approach suggested below. Thanks everyone!
Edit 2: I made a discovery that I feel is important: All tables that I can open using pyodbc have Owner=Admin whereas all tables that I cannot open have no owner at all (empty string it seems, "Owner=").
Edit 3: I gave JDBC a shot. Same error again, as one could expect given the finding in Edit 2. Apparently the problem to solve is the table ownership (although MDB Viewer under Linux doesn't seem to care about that...). Since the creator of the files says he didn't introduce any permission settings, I guess the strange table ownership could be the result of using new programs (like 2010) to read data produced by an old program (like sometime in the 90s), or was introduced during some migration of the old program. Any ideas on how to solve it?
You might be able to use VBScript. VBScript is usually used in ASP files for web pages, but it can be used standalone as a Windows program as well.
VBScript is free, as it's code you write in Notepad.
Others may come up with better answers for you. Good luck.

What does xp_qv do in SQL Server?

Last night one of our SQL servers developed some major problems, and after a colleague stopped it, started it, and did all the usual things, it started checking and rebuilding databases and is now running an extended stored procedure called "xp_qv".
The internet seems to be very short of information on what this procedure does or anything like that, so I was hoping somebody here might be able to help.
I should add that I assume it is meant to be running, so the question isn't "Can I stop it" or anything like that; it's just curiosity about what it is doing, in the hope that it will help determine how long before things are usable again...
This is the only information I could find:
xp_qv, hosted in xpsqlbot.dll, is a wrapper around functionality in sqlboot.dll, which returns information about the SKU type, licensing, etc. It is not documented; that is why you cannot find a reference.
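If you want to double-check which DLL hosts it on your own instance, sp_helpextendedproc reports the mapping; a minimal sketch, with placeholder connection details:

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=myserver;DATABASE=master;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()
    # Returns the extended procedure's name and the DLL it lives in.
    cursor.execute("EXEC sp_helpextendedproc 'xp_qv'")
    for name, dll in cursor.fetchall():
        print(name, dll)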
