Stealing my information back - Sybase

TL;DR: My POS uses Sybase Advantage Database Server to store my sales data, and I'd like to access it, but I only have the backup files.
I own a small business with "advanced" POS software, which has the only copy of all my sales data ever. They have some backup scheme, but they're unwilling to divulge any details. There's also an automatic daily local backup routine, but because this is a POS and there are certain laws about deleting data, I am not allowed (nor do I have the software required) to restore from backup even to check that it works. I asked the support guy when the last time he had to restore from backup was, and he said "don't worry, we don't ever need it".
Naturally, I'm worried.
I'll note at this point that I am required by law to keep this data, and should I fail to do so for any reason I may personally face massive fines in the range of multiple millions. I'd like to avoid that.
In addition to keeping the data and verifying that the backups actually contain it, I'd also like to create reports. The POS vendor claims it can create any report I'd ever need, but every single time I've asked them about a report it either contained wrong data, crashed, exported unreadable files (to which their reply was that the files are fine, my [insert relevant file reader] is broken), or simply didn't exist (to which their reply was usually something like "you don't need that report anyway"). I asked about accessing a copy of the database myself, and they said they can't allow that. My only recourse is to pay them tens of thousands for developing and testing the report. What report do I want, you ask?
SELECT * FROM SALES
To create this simple report, I need to migrate my data from the Sybase Advantage Database Server backup files into a format I can use, e.g. a MySQL database, but all the migration tools I've found require access to a working database server.
How can I get my data out of these backups?

It's actually a complete Sybase iAnywhere database (which uses .db for the data and .log for the transaction log).
So you should be looking for Sybase iAnywhere or SQL Anywhere drivers and tools.
SAP / Sybase has a developer website here: http://scn.sap.com/community/sql-anywhere
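For example, once you have the SQL Anywhere tools installed and a personal engine running against a copy of the .db file (never your only copy), you can dump tables to CSV from Interactive SQL with the UNLOAD statement. A minimal sketch, assuming default dba/sql credentials and the SALES table from the question:

    -- Run in Interactive SQL (dbisql) against a copy of the backed-up .db file.
    -- Table name is from the question; the output path is illustrative.
    UNLOAD TABLE SALES
    TO 'c:\\export\\sales.csv'
    DELIMITED BY ',' QUOTES ON ESCAPES ON;

From there, loading the CSV into MySQL (or anything else) is straightforward.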

Compression not available on SQL Server Standard? Options?

So as they say, every day is a school day. Today I learned that my workplace runs SQL Server Standard edition, where I would have assumed Enterprise was in place. Although in reality I shouldn't be surprised!
For some context, we have a very large database that houses our warehouse data. As the database has grown, it's causing space issues on the server along with some application performance problems. So I suggested we archive and purge the PROD database, keeping only 18 months of data in the PROD environment.
Wrote my scripts and tested them, and all was fine. I then went to compress the tables I had deleted data from, only to find error messages that compression is not available in SQL Server Standard and requires Enterprise edition.
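For reference, this is the sort of rebuild I was attempting (the table name here is illustrative):

    -- Page compression is applied via a rebuild; table name is hypothetical.
    ALTER TABLE dbo.WarehouseOrders REBUILD WITH (DATA_COMPRESSION = PAGE);
    -- On Standard edition this fails: page/row compression required
    -- Enterprise edition prior to SQL Server 2016 SP1.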
Wondering what my next steps are here? My assumption is that even though I am deleting a lot of data, we won't actually see any benefit in performance or space reclamation until the tables get compressed.
Shrinking is something I guess I've always shied away from; many articles and posts here advise against using it.
Wondering what sort of options I have here?
Is my assumption correct, in that without compressing, we won't regain space from the trimmed database?
Moving this to resolved, as I'm opening the query in the DBA-specific portion of the site.

Rebuilding an unstable tool from scratch (Currently Access based - can go anywhere)

I have inherited a custom built tool that is poorly designed and unstable, and I have a great opportunity to rebuild it from scratch. This is an internal tool only that works almost entirely in Access, and its purpose is to provide higher detail on parts that cost the company over a certain dollar amount.
How it works:
1) The raw data (new part numbers) gets pulled nightly from the EDW via macros in Access.
2) The same macros then join two tables (part numbers from one, names from another). Any part under a certain dollar amount is removed, and the new data is appended to the existing Access database (roughly the query sketched after this list).
3) During the day employees can then open a custom Access form to add more details about the part. Different questions are asked depending on the part category.
4) The completed form is forwarded to management, and the information entered is retained in the Access database – it does not write back to the EDW.
5) Managers can also pull some basic reports from the database, based on overall costs.
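For reference, the nightly join-and-filter in step 2 boils down to something like this in SQL (table names, columns, and the dollar threshold are all illustrative, not from the actual tool):

    -- Sketch of step 2: join part numbers to names, drop cheap parts,
    -- append the rest to the working table. All names are hypothetical.
    INSERT INTO PartDetails (PartNumber, PartName, Cost)
    SELECT p.PartNumber, n.PartName, p.Cost
    FROM NewParts AS p
    INNER JOIN PartNames AS n ON n.PartNumber = p.PartNumber
    WHERE p.Cost >= 1000;  -- hypothetical dollar threshold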
The problems:
1) Currently everyone has to have Access installed on their workstations, and whenever there is an update the new database gets pushed to their stations. This is not considered an ideal situation by management or IT.
2) If anyone has accidentally left the tool open at the end of the day, the database is locked, so the macros cannot run and the tool cannot be updated with new part numbers.
3) If the tool cannot update for a few days in a row the database can become corrupted. We can restore from the last good backup, but in the past this has resulted in the loss of multiple days of work.
Ideally we want to take the tool completely out of Access. I am building a SharePoint site that can host the tool, which (if I can get it right) will eliminate the need for Access on end-user stations and for the database push. However, the SharePoint form would need read/write access to the database.
The big question is: How do I build this?
I have a completely open path of possibilities - I can design it to work any way I want, using any tools or platform I want, as long as it works. It does not have to update automatically, as I already run a number of SQL scripts at the start of my day and adding one more is inconsequential.
The resources I have at my immediate disposal are: SharePoint (with designer), Access, Toad, and SQL Server. The database can be hosted on a shared network drive.
I am a recent college graduate with basic SQL knowledge. I have about a year to produce a final product, but would like to get it up and running far sooner if possible.
Any advice on what direction to pursue would be very helpful, thank you.
Caveat: I've never worked with SQL Server, so I don't know all of its capabilities (I'm an Oracle developer).
What I'd do in your situation is something like the following (although not necessarily in this exact order):
1) Get a SQL Server database set up to host your tables.
2) Create the tables etc.
3) Migrate test data across (I'm assuming you have a dev/uat/test environment for your current system! If you haven't, make sure you set up at least a test environment separate from prod for your new db!)
4) Write stored procs to do the work of adding new parts, updating existing data, etc. (see the sketch after this list).
5) Set up an automated job on the db (SQL Server Agent can do this) to do the overnight processing.
6) Create a separate db user with the necessary permissions to call the stored procedures.
7) Get your frontend to call the stored procs with the relevant parameters, using the db user you created in step 6 to connect to the db.
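As a rough illustration of step 4, here's the shape such a proc might take (the table and column names are hypothetical, not from the question):

    -- Hypothetical sketch: record extra detail entered for a part.
    CREATE PROCEDURE dbo.AddPartDetail
        @PartNumber nvarchar(50),
        @Detail     nvarchar(2000),
        @EnteredBy  nvarchar(50)
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO dbo.PartDetails (PartNumber, Detail, EnteredBy, EnteredAt)
        VALUES (@PartNumber, @Detail, @EnteredBy, GETDATE());
    END;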
You'd also have to think about transaction control to try to mitigate the case where users go home at the end of the day without committing their work - does the db handle the commits/rollbacks, or does SharePoint?
Once you've worked out everything in your test environment, it's then a case of creating the prod db, users and objects, and then working out the best way of migrating the prod data across.
Good luck.
Don't forget to get backups for the new db set up as well.
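A minimal sketch of what that might look like (database name and path are hypothetical; in practice you'd schedule this as a SQL Server Agent job and add differential/log backups to suit):

    -- Hypothetical nightly full backup of the new database.
    BACKUP DATABASE PartsTool
    TO DISK = N'\\backupserver\sql\PartsTool.bak'
    WITH INIT, CHECKSUM;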

SQL Server: back up only those stored procedures whose object definition has been modified

I have a scenario where I need to create a backup of a database that contains huge amounts of data (GBs). Once the full backup is done, I am trying to optimize it using a partial backup, or by backing up only those SPs whose object definition has been modified.
One way I can think of is comparing by object modification date, say within the past 7 days.
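For example, something along these lines would list the procedures changed in the last week:

    -- List stored procedures modified in the last 7 days.
    SELECT name, modify_date
    FROM sys.objects
    WHERE type = 'P'
      AND modify_date >= DATEADD(DAY, -7, GETDATE());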
Can you please let me know a better way to achieve this?
You do not back up databases that way. You back up the data in the database first and foremost. Objects are all backed up together; you can't choose to leave out a single table either. You do a full backup on a schedule (like once a week), then differential backups nightly, and then transaction log backups roughly every 15 minutes. Frankly, the fact that you are asking this question tells me your company needs to hire a DBA to protect its data.
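That schedule boils down to three statements run as scheduled jobs (the database name and paths here are hypothetical):

    -- Weekly full backup.
    BACKUP DATABASE Sales TO DISK = N'D:\backup\Sales_full.bak' WITH INIT, CHECKSUM;
    -- Nightly differential (changes since the last full).
    BACKUP DATABASE Sales TO DISK = N'D:\backup\Sales_diff.bak' WITH DIFFERENTIAL, CHECKSUM;
    -- Log backup, roughly every 15 minutes (requires the FULL recovery model).
    BACKUP LOG Sales TO DISK = N'D:\backup\Sales_log.trn' WITH CHECKSUM;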
Next, stored procs should be in source control like any other code. You can tell what the current version is the same way you tell the current version of any code. If you need to restore only one, you can do it from the source control repository. This does require that you have procedures in place that do not permit developers to push code to servers beyond dev, and that the build team or managers who do have the rights only push from the source-controlled version.
Before optimizing anything in backups, you should really know your Recovery Point Objective and Recovery Time Objective -- meaning, basically, how long your system can be down and how much data you can afford to lose. That's what you should use to plan your backups.

Storing binary files in SQL Server

I'm writing an MVC/SQL Server application that needs to associate documents (Word, PDF, Excel, etc.) with records in the database (supporting SQL Server 2005). The consensus is that it's best to keep the files in the file system and only save a path/reference to the file in the database. However, in my scenario an audit trail is extremely important. We already have a framework in place to record audit information whenever a change is made in the system, so it would be nice to use the database to store documents as well. If the documents were stored in their own table with a FK to the related record, would performance become an issue? I'm aware of the potential problems with backups/restores, but would db performance start to degrade at some point if the document tables became very large? If it makes any difference, I would never expect this system to need to service anywhere near 100 concurrent requests, maybe tens of requests.
Storing the files as BLOBs in the database will increase the size of the db and will definitely affect the backups, as you already know.
One consideration is whether the db and application server are the same machine, because every request flows from the client to the application server, which fetches the data from the db server and returns it to the client - large files make that round trip heavier.
If the file sizes are very large, I would say go for the file system and save file paths in the db.
Otherwise you can keep the files as BLOBs in the db; it will arguably be more secure, as well as safer from viruses, etc.
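A minimal sketch of the separate-documents-table idea (the table and column names are hypothetical; varbinary(max) is the appropriate BLOB type on SQL Server 2005):

    -- Hypothetical schema: documents in their own table, FK to the parent record.
    CREATE TABLE dbo.Documents (
        DocumentId int IDENTITY(1,1) PRIMARY KEY,
        RecordId   int NOT NULL REFERENCES dbo.Records (RecordId),
        FileName   nvarchar(260)  NOT NULL,
        Content    varbinary(max) NOT NULL,
        UploadedAt datetime NOT NULL DEFAULT GETDATE()
    );

Keeping the BLOB column in its own table, rather than on the main record, means queries that never touch documents never read the large pages.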

Considerations: DECS vs SSIS?

I need a solution to pump data from Lotus Notes to SQL Server. Data will be transferred in two modes:
Archive data transfer
Current data transfer
Availability of data in SQL Server is not critical; the data is used for reports. Reports could be created daily, weekly or monthly.
I am considering choosing one of these solutions: DECS and SSIS. Could you please give me some tips about the pros and cons of both technologies? If you suggest something else, it could also be taken into consideration.
DECS - Domino Enterprise Connection Services
SSIS - SQL Server Integration Services
I've personally used XML frequently to get data out of Lotus Notes in a way that can be read easily by other systems. I'd suggest you take a look and see if that fits your needs. You can create views that emit XML or use NotesAgents or Java Servlets, all of which can be accessed using HTTP.
SSIS is a terrific tool for complex ETL tasks. You can even write C# code if you need to. There are lots of pre-written data-cleaning components already out there for you to download if you want. It can pretty much do anything you need it to. It does, however, have a fairly steep learning curve. SSIS comes free with SQL Server, so that is a plus. A couple of things I really like about SSIS are the ability to log errors and the way it handles configuration, so that moving the package from the dev environment to QA and prod is easy once you have set it up.
We have also set up a metadata database to record a lot of information about our imports, such as the start and stop time, when the file was received, the number of records processed, types of errors, etc. This has really helped us in researching data issues, and has helped us write some processes that automatically stop when a file exceeds the normal parameters by a set amount. This is handy if you normally receive a file with 2 million records and the file comes in one day with 1,000 records. Much better than deleting 2,000,000 potential customer records because you got a bad file. We also now have the ability to report on files that were received but not processed, or files that were expected but not received. This has tremendously improved our importing processes (we have hundreds of imports and exports in our system). If you are designing from scratch, you might want to take some time to think about what metadata you want to have and how it will help you over time.
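As a rough sketch, such an import-audit table might look like this (all names here are illustrative, not from any particular system):

    -- Illustrative metadata table for tracking file imports.
    CREATE TABLE dbo.ImportLog (
        ImportLogId   int IDENTITY(1,1) PRIMARY KEY,
        FileName      nvarchar(260) NOT NULL,
        ReceivedAt    datetime NOT NULL,
        StartedAt     datetime NULL,
        FinishedAt    datetime NULL,
        RecordsRead   int NULL,
        RecordsLoaded int NULL,
        ErrorCount    int NULL,
        Status        varchar(20) NOT NULL  -- e.g. 'Received', 'Processed', 'Rejected'
    );

A sanity check comparing RecordsRead against a typical historical count is then enough to halt a suspicious load before anything gets deleted.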
Now, depending on your situation at work: if there is a possibility that data will be sent to the SQL Server database from sources other than Lotus Notes, in addition to the Notes imports you are developing, I would suggest it might be worth your time to go ahead and start using SSIS, as that is how the other imports are likely to be done. As a database person, I would prefer to have all the imports I support using the same technology.
I can't say anything about DECS as I have never used it.
Just a thought - but as Lotus Notes tends to behave a bit "different" from relational databases (or anything else), you might be safer going with a tool that comes out of the Notes world, versus a tool from the SQL world.
(I have used DECS in the past (prior to Domino 8) and it has worked fine for pumping data out into a SQL Server database. I have not used SSIS).