I have 2 source folders in my project:
src/main/resources/sql/oracle
src/main/resources/sql/sqlserver
They both have a file called mh1.sql.
The project I'm working on used to support only the Oracle database, so it just uses ClassPathResource("mh1.sql") to load the SQL file directly. Now I need to support different kinds of databases and switch to the correct SQL file according to the database type we're using. Is there a good way to do this without a big impact on the old project? Any rough ideas?
BTW, I find that after compilation I can only find one mh1.sql under the bin folder. I'm new to Eclipse, and I'm curious to know whether it's possible to output these two folders, oracle and sqlserver, to the bin folder so that each contains its own mh1.sql file?
As for your second question: without knowing your exact Eclipse project settings it's close to impossible to tell why you're not seeing the oracle and sqlserver subfolders in your bin folder. However, it should be obvious that fixing this is a prerequisite for your first problem.
Have a look at the ClassPathResource docs; they tell you that you can/should provide a path to your resource rather than just the name. Hence, you can use ClassPathResource("sql/oracle/mh1.sql").
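From there, switching by database type is just a matter of building the path dynamically. A minimal sketch, assuming the database type is available to you as a string such as "oracle" or "sqlserver" (how you obtain it depends on your configuration):

import org.springframework.core.io.ClassPathResource;
import org.springframework.core.io.Resource;

public class SqlResourceLoader {
    // dbType is an assumed configuration value, e.g. "oracle" or "sqlserver"
    public Resource loadSql(String dbType, String fileName) {
        return new ClassPathResource("sql/" + dbType + "/" + fileName);
    }
}

The only change to the old code is where the path is built; callers keep receiving a Resource exactly as before, so the impact on the existing project stays small.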
Having said all that you might also just dump the two files in src/main/resources/sql/ (omitting the subfolders) and give them unique names: ora-mh1.sql and mssql-mh1.sql.
I have a .net project as well as a setup project. I also have it so that during installation it asks the user to enter a file location to store their database. The plan is to have an empty .mdf file, with all the tables set up, copied into that folder, and I store the folder path in a config file.
This is mainly because I am planning on having multiple separate applications that all need the ability to access the same database. I have it storing the folder path in my config file; the only things I'm having trouble with are:
storing the template files (I don't know if I should do this in the setup project or the main project)
how to copy said template files into a new folder
So far I have been unable to find a solution, so any help is appreciated.
Well, here is what I do in a few of my projects - something that has proven reliable enough for me over the years (which you may or may not want to do as well):
I have the program itself create the database files in an initialization routine. First however, it creates the sub folders in which the database files will be stored, if they don't already exist.
To do this, the program just checks if the folder exists and if the database file exists and if they do not, it creates them on the spot:
' Create the database folder on first run, if it doesn't already exist
If Not Directory.Exists(gSQLDatabasePathName) Then
    Directory.CreateDirectory(gSQLDatabasePathName)
End If

' Create the database file on first run, if it doesn't already exist
If Not File.Exists(gSQLiteFullDatabaseName) Then
...
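    ' (Sketch, not from my original code:) to cover the "copy the template
    ' into a new folder" part of your question, this first-run branch can
    ' copy a template database shipped with the app. gTemplateDatabaseName
    ' is an assumed variable pointing at that bundled template file:
    File.Copy(gTemplateDatabaseName, gSQLiteFullDatabaseName)
End If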
I also have the program do some other stuff in the initialization routine, like creating an encryption key to be used when storing / retrieving the data - but that may be more than you need (also, for full disclosure, this has some rare issues that I haven't been able to pin down).
Here too are some additional considerations:
I appreciate you have said that you want to give the user the choice of where to store their database files. However, I would suggest storing them in the standard locations
Where is the correct place to store my application specific data?
and only allowing the users to move them if they really need to (for example if the database needs to be shared over the network), as it will make the support of your app harder if every user has their data stored in a different place.
I have found letting the user see in their options/settings windows where their database is stored is a good idea.
Also encourage them to back those files/directories up.
Also consider creating automatic backups of several generations for the user.
Hope this helps.
I currently don't have the option to check this question myself, since I do not have two computers running AnyLogic.
My question is: if you share your AnyLogic .alp file with somebody, is he able to open the model?
Or do you also need to share the other things in your model folder when you save it (3D map, database map and pictures)?
Or do you need even more, since when you save an AnyLogic file you also have to name the Java package?
I'm asking this because my computer is nearing the end of its life, and I need to know what to put in the cloud so that when it crashes I can still use the model.
I have currently put the map from the model folder in the cloud, as well as the database Excel files (but no file about the Java package, because I could not find it).
You only need the .alp file itself and all items that are in the model folder where the .alp sits. This can include:
images
database subfolder
lib subfolder (holding any required jar files)
logs subfolder
outputs subfolder
So if you simply zip the entire model folder and send it over, the other person unzips it and, provided he/she uses the same AnyLogic version, all will work.
If it is an older version of AnyLogic on the other end, it may not work (but also it may, it depends on the version difference).
Also, if you created the model with the Professional version and used some elements that are not available in the free PLE version, it will not work for a person who only has the PLE.
If you really just need the other person to run the model, consider compiling it or using the AnyLogic Cloud.
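If you want to script the "zip the entire model folder" step, here is a minimal Java sketch (the folder and archive names are placeholders; adjust them to your model):

import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipModel {
    public static void main(String[] args) throws IOException {
        Path modelDir = Paths.get("MyModel");      // placeholder: folder containing the .alp
        Path archive  = Paths.get("MyModel.zip");  // placeholder: archive to create
        try (Stream<Path> walk = Files.walk(modelDir);
             ZipOutputStream zip = new ZipOutputStream(Files.newOutputStream(archive))) {
            List<Path> files = walk.filter(Files::isRegularFile).collect(Collectors.toList());
            for (Path file : files) {
                // store entries relative to the model folder so it unzips cleanly
                zip.putNextEntry(new ZipEntry(modelDir.relativize(file).toString().replace('\\', '/')));
                Files.copy(file, zip);
                zip.closeEntry();
            }
        }
    }
}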
Our current database system is a Clipper DOS application. The database inside its folder is fragmented/divided into many parts. I want to decrypt the database so that I will have only one database in all and avoid reshuffling of data. I'll attach a screenshot of the file folder. The database is in .DBF format.
Screenshot of files
Often you can decompile the Clipper exe file to source code and work from the .prg; I've done it many times. The program to use is called Valkyrie.
In Clipper and FoxPro for DOS, a .dbf file is a simple table file. If you want to use these tables as a database with many tables in one unit, you can import them into an MS SQL database and/or an MS Access database.
I see that you got several answers. Most are partially right. Let's address these one at a time:
All those files essentially comprise the "database" for the application you're using. They could be used by other applications as well. Besides having a lot of files, what is the problem you're trying to solve?
People mentioned indexes. You can generally ignore these. They are there primarily to make access to the data files faster. Any properly written Clipper application will recreate them if they're missing or corrupted. You could test this by renaming one, running the app, and seeing what happens. If it doesn't recreate it you can name it back. Not replacing missing index files would be unusual behavior.
The DBF file format is binary, but barely. Most of what's in a DBF is text and is readable with an editor. But there's no reason to do so - I'm sure there are several free DBF utilities out there to read DBF files. Getting the structure of the files could be very helpful.
Getting the data out of the files would also be fairly simple with a utility. If you look up the DBF format you could even write one fairly easily in Clipper, any other language that uses DBF files, or in something like Python. Any language that can open and write files, really. It's not hard - any competent developer could do this in a matter of hours. Much less if you're using Clipper or another language that natively reads DBF files.
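To illustrate how approachable the format is, here is a minimal Java sketch that reads the standard dBase III header layout and prints the record count and field definitions (file name taken from the command line; no libraries needed):

import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Dumps the table structure of a dBase/Clipper .DBF file.
public class DbfDump {
    public static void main(String[] args) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            byte[] head = new byte[32];
            in.readFully(head);                       // fixed 32-byte file header
            int records = (head[4] & 0xFF) | (head[5] & 0xFF) << 8
                        | (head[6] & 0xFF) << 16 | (head[7] & 0xFF) << 24;
            System.out.println("record count: " + records);
            byte[] fd = new byte[32];                 // 32-byte field descriptors follow
            while (true) {
                in.readFully(fd, 0, 1);
                if (fd[0] == 0x0D) break;             // 0x0D terminates the field list
                in.readFully(fd, 1, 31);
                int end = 0;                          // field name is null-padded, max 11 chars
                while (end < 11 && fd[end] != 0) end++;
                String name = new String(fd, 0, end, StandardCharsets.US_ASCII);
                char type = (char) fd[11];            // C=character, N=numeric, D=date, L=logical, M=memo
                int length = fd[16] & 0xFF;
                int decimals = fd[17] & 0xFF;
                System.out.printf("%-11s %c %3d.%d%n", name, type, length, decimals);
            }
        }
    }
}

Once you have the field names, types, and lengths, exporting rows to CSV or generating CREATE TABLE statements for another database is mostly bookkeeping.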
Most people create dBase/Clipper programs with relational data, like SQL Server. Where SQL Server has tables that relate to each other, dBase/Clipper has a file for each "table." This isn't a requirement, but it was almost certainly done this way.
Given that, if you get the table structures through a utility or by reading the headers in an editor (don't save them from an editor!) you could quite likely recreate the database schema (i.e. the map of the data). Once you have that it's fairly trivial to get the data into another type of database (SQL Server, Access, or whatever you like to use). If none of the files are too large it's even conceivable to put all the files into Excel sheets. It really depends on what you want to do with it.
As others have said, you may be able to recover the code with Valkyrie. Some people have used it very successfully. I don't know where you get it and I've never used it. Why do you not have the code? If this is a commercial application you likely should not have it. If it's a custom app, whoever wrote it or paid to have it written should have the code.
Again, it's not clear to me what problem you're trying to solve. But there are many options for doing something with those DBF files. Fortunately they are one of the easier to read data formats you could be working with.
Let me know if you have any questions. Apologies for the typos that are no doubt scattered throughout this reply.
You can sort of get an idea of how they relate to each other by opening the index files they use (.NTX files). If you have the DBU utility (executable) around, you can open the DBF and load the index (NTX). LibreOffice Calc is also able to open DBFs (I haven't tested .NTX).
If you open the .NTX on a text editor you will see the indexes in the beginning.
I open them with Access, but I can save the data using a PrintFill program.
Before I begin, I would like to express my appreciation for all of the insight I've gained on stackoverflow and everyone who contributes. I have a general question about managing large numbers of files. I'm trying to determine my options, if any. Here it goes.
Currently, I have a large number of files and I'm on Windows 7. What I've been doing is categorizing the files by copying them into folders based on what needs to be processed together. So, I have one set that contains the files by date (for long term storage) and another that contains the copies by category (for processing and calculations). Of course this doubles my data each time. Now I'm having to create more than one set of categories; 3 copies to be exact. This is quadrupling my data.
For the processing side of things, the data ends up in Excel. Originally, all the data was brought into Excel, and all organization and filtering was performed there. This was time consuming and not easily maintainable over the long term. Later the workload was shifted to the file system itself, which lightened the work in Excel.
The long and short of it is that this is an extremely inefficient use of disk space. What would be a better way of handling this?
Things that have come to mind:
Overlapping Folders
Is there a way to create a folder that only holds the addresses of files, rather than copies of the files? This way I could have two folders reference the same file.
To my understanding, a folder is a file listing the memory addresses of the files inside of it, but on Windows a file can only be contained in one folder.
Microsoft SQL Server
Not sure what could be done here.
Symbolic Links
I'm not an administrator, so I cannot execute the mklink command.
Also, I'm uncertain about any performance issues with this.
A Junction
Apparently not allowed for individual files, only folders, in Windows.
Search folders (*.search-ms)
Maybe I'm missing something, but to my knowledge there is no way to specify individual files to be listed.
Hashing the files
Creating hashes for all the files would allow each file to be stored once. But then I have no idea how I would handle the hashes (a rough sketch follows this list).
XML
Maybe I could use XML files to attach metadata to the files and somehow search using them.
Database File System
I recently came across this concept in my search. Not sure how it would apply to Windows.
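For option 7, a rough sketch of what I have in mind (a minimal Java example; the folder name is a placeholder): index each file by its SHA-256 hash so duplicates can be detected and each unique file stored once, with a map from hash to the logical locations where it belongs.

import java.math.BigInteger;
import java.nio.file.*;
import java.security.MessageDigest;
import java.util.*;

public class HashIndex {
    public static void main(String[] args) throws Exception {
        Path root = Paths.get("C:/Data/ByDate");   // placeholder: where the originals live
        Map<String, List<Path>> byHash = new HashMap<>();
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        try (DirectoryStream<Path> files = Files.newDirectoryStream(root)) {
            for (Path file : files) {
                if (!Files.isRegularFile(file)) continue;
                sha.reset();
                String hash = new BigInteger(1, sha.digest(Files.readAllBytes(file))).toString(16);
                byHash.computeIfAbsent(hash, h -> new ArrayList<>()).add(file);
            }
        }
        // files that share a hash are duplicates and only need to be stored once
        byHash.forEach((hash, paths) -> {
            if (paths.size() > 1) System.out.println(hash + " -> " + paths);
        });
    }
}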
I have found a partial solution. First, I discovered that the laptop I'm using is actually logged in as Administrator. As an alternative to options 3 and 4, I have decided to use hard-links, which are part of the NTFS file system. However, due to the large number of files, this is unmanageable using the following command from an elevated command prompt:
mklink /h <new\link> <existing\file>
Luckily, Hermann Schinagl has created the Link Shell Extension application for Windows Explorer, along with a very insightful explanation of how Junctions, Symbolic Links, and Hard Links work. The only reason this is currently a partial solution is a separate problem with Windows Explorer, which I intend to post as a separate question. Thank you Hermann.
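Until that is sorted, the linking itself can also be scripted; Java's NIO creates NTFS hard links directly. A minimal sketch (the folder names are placeholders, and hard links only work within a single NTFS volume):

import java.io.IOException;
import java.nio.file.*;

public class LinkCategory {
    public static void main(String[] args) throws IOException {
        Path store = Paths.get("C:/Data/ByDate/2013-01");        // placeholder: where files really live
        Path category = Paths.get("C:/Data/ByCategory/Reports"); // placeholder: the category "view"
        Files.createDirectories(category);
        try (DirectoryStream<Path> files = Files.newDirectoryStream(store)) {
            for (Path file : files) {
                if (!Files.isRegularFile(file)) continue;
                // creates a second directory entry for the same data; nothing is copied
                Files.createLink(category.resolve(file.getFileName()), file);
            }
        }
    }
}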
I have two folders with subfolders. One folder is the base folder, and the other one is the local folder. These folders and subfolders contain identical as well as non-identical files.
I want compare these folders and report the result in some other file. This report must consist of:
List of all identical files
List of all modified files
List of any missing files or folders
List of all non-identical files
Is there a tool, batch script or utility for this?
I have tried WinMerge, but it's not the solution.
BeyondCompare is what you're looking for. It will give you all those reports in a matter of clicks.
UPDATE
As a follow-up to your comment, you can also use this approach, which is free as in beer, and does everything via the command line.
Install a diff tool (for example GNU DiffUtils for Windows), and you'll be able to run diff.exe on the command line.
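For example, a recursive comparison that reports which files differ or exist on only one side (the folder names here are placeholders):
diff.exe -r -q C:\base_folder C:\local_folder
The -r flag recurses into subfolders and -q (brief mode) reports only which files differ or are missing on one side, which maps fairly well onto the report you described; redirect the output to a file to keep it.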
Araxis Merge is very good for this. It comes in a 64-bit version as well.
I have to recommend Beyond Compare for this; it's an excellent folder and file comparison tool. You can also integrate it into Visual Studio for file comparison, which gives a much better merging experience.
You can use WinDiff, a graphical file-comparison program.