VB.NET Copying Database template files to selected folder location during installation - database

I have a VB.NET project as well as a setup project. I also have it set up so that during installation it asks the user to enter a file location to store their database. The plan is to have an empty .mdf file, with all the tables set up, copied into that folder, and I store the folder path in a config file.
This is mainly because I am planning on having multiple separate applications that all need the ability to access the same database. I have it storing the folder path in my config file; the only things I'm having trouble with are:
storing the template files (I don't know if I should do this in the setup project or the main project)
how to copy said template files into a new folder
So far I have been unable to find a solution, so any help is appreciated.

Well, here is what I do in a few of my projects - something that has proven reliable enough for me over the years (which you may or may not want to do as well):
I have the program itself create the database files in an initialization routine. First, however, it creates the subfolders in which the database files will be stored, if they don't already exist.
To do this, the program just checks whether the folder and the database file exist and, if they do not, creates them on the spot:
' Requires: Imports System.IO
If Not Directory.Exists(gSQLDatabasePathName) Then
    Directory.CreateDirectory(gSQLDatabasePathName)
End If

If Not File.Exists(gSQLiteFullDatabaseName) Then
    ...
I also have the program do some other stuff in the initialization routine, like creating an encryption key to be used when storing/retrieving the data - but that may be more than you need (also, for full disclosure, this has some rare issues that I haven't been able to pin down).
Here too are some additional considerations:
I appreciate that you have said you want to give the user the choice of where to store their database files. However, I would suggest storing them in the standard locations
Where is the correct place to store my application specific data?
and only allowing the users to move them if they really need to (for example if the database needs to be shared over the network), as it will make the support of your app harder if every user has their data stored in a different place.
I have found that letting the user see in their options/settings window where their database is stored is a good idea.
It also helps to encourage them to back those files/directories up,
and to create automatic backups of several generations for the user.
Hope this helps.


Is the .alp file enough to run a model on another device?

I currently don't have the option to check this myself, since I do not have two computers running AnyLogic.
My question is: if you share your AnyLogic .alp file with somebody, will they be able to open the model?
Or do you also need to share the things referenced by your model when you save it (3D map, database map and pictures)?
Or do you need even more, since when you save an AnyLogic file you also have to name the Java package?
I'm asking this because my computer is reaching its end, and I need to know what I have to put in the cloud in case my computer crashes and I am no longer able to use it.
I have currently put the map from the model folder in the cloud along with the database Excel files (but no file for the Java package, because I could not find one).
You only need the .alp file itself and all items that are in the model folder where the .alp sits. This can include:
images
database subfolder
lib subfolder (holding any required jar files)
logs subfolder
outputs subfolder
So if you simply zip the entire model folder and send it over, the other person unzips it and, if he/she uses the same AnyLogic version, everything will work.
If it is an older version of AnyLogic on the other end, it may not work (but also it may, it depends on the version difference).
Also, if you created the model with the Professional version and you used some elements that are not accessible in the free PLE version and if the other person only uses the PLE, it will also not work.
If you really just need the other person to run the model, consider compiling it or using the AnyLogic cloud.

How to deal with issues when storing uploaded files in the file system for a web app?

I am building a web application where the users can create reports and then upload some images for the created reports. Those images will be rendered in the browser when the user clicks a button on the report page. The images are confidential and only authorized users will be able to access them.
I am aware of the pros and cons of storing images in a database, in the filesystem, or in a service like Amazon S3. For my application, I am inclined to keep the images in the filesystem and the paths of the images in the database. That means I have to deal with the problems arising around distributed transaction management. I need some advice on how to deal with these problems.
1- I believe one of the proper solutions is to use technologies like JTA and XADisk. I am not very knowledgeable about these technologies, but I believe two-phase commit is how atomicity is achieved. I am using MySQL as the database, and it seems that two-phase commit is supported by MySQL. The problem with this approach is that XADisk does not seem to be an active project, there is not much documentation about it, and there is the fact that I am not very knowledgeable about the ins and outs of this approach. I am not sure if I should invest in this approach.
2- I believe I can get away with some of the problems arising from the violation of ACID properties for my application. While uploading images, I can first write the files to disk, and if this operation succeeds I can update the paths in the database. If the database transaction fails, I can delete the files from the disk. I know that is still not bulletproof; a power outage might occur just after the db transaction, or the disk might not be responsive for a while, etc. I know there are also concurrency issues: for instance, if one user tries to modify the uploaded image and another tries to delete it at the same time, there will be some problems. Still, the chances of concurrent updates in my application will be relatively low.
I believe I can live with orphan files on the disk or orphan image paths in the db if such exceptional cases occur. If a file path exists in the db but not in the file system, I can show a notification to the user on the report page and they might try to re-upload the image. Orphan files in the file system would not be too much of a problem; I might run a process to detect such files from time to time. Still, I am not very comfortable with this approach.
3- The last option might be to not store file paths in the db at all. I can structure the filesystem such that I can infer the file path in code and load all images at once. For instance, I can create a folder named after the report id for each report. When a request is made to load the images of a report, I can load them at once since I know the report id. That might end up with a huge number of folders in the filesystem, and I am not sure if such a design is acceptable. Concurrency issues will still exist in this scheme.
I would appreciate some advice on which approach I should follow.
I believe you are trying to be ultra-correct, and maybe not that much is needed, but I also faced a similar situation some time ago and explored different possibilities. I disliked options aligned with your option 1, but for options 2 and 3 I have had different successful approaches.
Let's sum up first the list of concerns:
You want the file to be saved
You want the file path to be linked to the corresponding entity (i.e. the report)
You don't want a file path to be linked to a file that doesn't exist
You don't want files in the filesystem not linked to any report
And the different approaches:
1. Using DB
You can rely on transactions in the DB with pretty much any relational database, and with S3 you can count on read-after-write consistency for newly uploaded objects: if you PUT an object and you get a 200 OK, it will be readable. Now, how do you put all this together? You need to keep track of the process. I can see two ways:
1.1 With a progress table
The upload request is saved to a table with anything needed to identify this file: report id, temp uploaded file path, destination path, and a status column
You save the file
If the file save fails, you can update the record in the table, or delete it
If saving the file is successful, in a transaction:
update the progress table with successful status
update the table where you actually save the relationship report-image
Have a cron job that checks not the filesystem but the progress table. If there is any orphan file in the filesystem, it has definitely been added to the table (that was point 1). Here you can decide whether to delete the file or, if you have enough info, continue with the aborted process by triggering point 4.
The progress table can even be the same report-image relationship table with some extra status columns. A minimal sketch of this flow follows the list.
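As a rough sketch of this progress-table flow (my own illustration, not code from the original answer): the table and column names (upload_progress, report_image), the MySQL connection details and the status values are all assumptions made up for the example.

import java.nio.file.*;
import java.sql.*;

public class ReportImageUploader {

    // Assumed connection details for the sketch.
    private final String jdbcUrl = "jdbc:mysql://localhost:3306/reports";
    private final String dbUser = "app";
    private final String dbPass = "secret";

    /** Records the intent in a hypothetical upload_progress table, moves the file,
     *  then updates both tables in a single DB transaction. */
    public void upload(long reportId, Path tempFile, Path destination) throws Exception {
        long progressId;

        // 1. Record the intent before touching the filesystem.
        try (Connection con = DriverManager.getConnection(jdbcUrl, dbUser, dbPass);
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO upload_progress (report_id, temp_path, dest_path, status) VALUES (?, ?, ?, 'PENDING')",
                     Statement.RETURN_GENERATED_KEYS)) {
            ps.setLong(1, reportId);
            ps.setString(2, tempFile.toString());
            ps.setString(3, destination.toString());
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                keys.next();
                progressId = keys.getLong(1);
            }
        }

        // 2. Move the file into its final location.
        try {
            Files.createDirectories(destination.getParent());
            Files.move(tempFile, destination, StandardCopyOption.REPLACE_EXISTING);
        } catch (Exception e) {
            markFailed(progressId); // the cron job can clean up later using this status
            throw e;
        }

        // 3. In one transaction: mark the progress row done and link the image to the report.
        try (Connection con = DriverManager.getConnection(jdbcUrl, dbUser, dbPass)) {
            con.setAutoCommit(false);
            try (PreparedStatement done = con.prepareStatement(
                         "UPDATE upload_progress SET status = 'DONE' WHERE id = ?");
                 PreparedStatement link = con.prepareStatement(
                         "INSERT INTO report_image (report_id, path) VALUES (?, ?)")) {
                done.setLong(1, progressId);
                done.executeUpdate();
                link.setLong(1, reportId);
                link.setString(2, destination.toString());
                link.executeUpdate();
                con.commit();
            } catch (Exception e) {
                con.rollback(); // file stays on disk; the PENDING row lets the cron decide what to do
                throw e;
            }
        }
    }

    private void markFailed(long progressId) throws SQLException {
        try (Connection con = DriverManager.getConnection(jdbcUrl, dbUser, dbPass);
             PreparedStatement ps = con.prepareStatement(
                     "UPDATE upload_progress SET status = 'FAILED' WHERE id = ?")) {
            ps.setLong(1, progressId);
            ps.executeUpdate();
        }
    }
}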
1.2 With a queue system
Like RabbitMQ, SQS, AMQ, etc
A very similar approach could be done with any queue system instead of a db table. I won't give many details because it depends more on your real infrastructure; this is just the general idea.
The upload request goes to a queue: you send a message with anything you may need to identify this file, the report id and, if you want, a tentative final path.
You upload the file
A worker reads pending messages in the queue and does the work. The message is marked as consumed only when everything goes well.
If something fails, naturally the message will come back to the queue
In the next time a message is read, the worker can have enough info to see if there is work to resume, or even a file to delete if resuming is not possible
In both cases, concurrency problems won't be straightforward to manage, but they can be managed (relying on DB locks in the first case and FIFO queues in the second), always with some application logic. A rough sketch of the queue variant follows.
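As a minimal sketch of the queue variant (again my own illustration, not from the answer), here is a JMS consumer using ActiveMQ as one possible broker; the broker URL, queue name and message format are assumptions. CLIENT_ACKNOWLEDGE is what lets an unacknowledged message go back to the queue when processing fails.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class ImageUploadWorker {

    public static void main(String[] args) throws Exception {
        // Assumed broker URL and queue name.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        // CLIENT_ACKNOWLEDGE: the message is only removed from the queue
        // once we explicitly acknowledge it after all the work succeeded.
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("report.image.uploads"));

        while (true) {
            Message message = consumer.receive();          // blocks until a message arrives
            TextMessage text = (TextMessage) message;      // e.g. "reportId;tempPath;finalPath"
            try {
                processUpload(text.getText());             // move the file and update the DB
                message.acknowledge();                     // only now is the message consumed
            } catch (Exception e) {
                // No acknowledge: delivery is restarted, so the next attempt can
                // resume the aborted work or delete a half-written file.
                session.recover();
            }
        }
    }

    private static void processUpload(String payload) {
        // Placeholder for the actual work: save/move the file and update the
        // report-image relationship in the database.
    }
}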
2. Without DB
To some extent, a system without a database would be perfectly acceptable, if we can defend it as a proper convention-over-configuration design.
You have to deal with 3 things:
Save files
Read files
Make sure that the structure of the filesystem is manageable
Let's start with 3:
Folder structure
In general, something like one folder per report id will be too simple, maybe hard to maintain, and ultimately too flat. This will cause issues: if we have an images folder with one folder per report, and tomorrow you have, let's say, 200k reports, the images folder will have 200k entries, and even an ls will take too much time; the same goes for any programming language trying to access it. That will kill you.
You can think about something more sophisticated. Personally, I like an approach that I learnt from Magento 1 more than 10 years ago and have used a lot since then: a folder structure that first follows some fixed outer rules, then is extended with rules derived from the file name itself.
We want to save a product image. The image name is: myproduct.jpg
The first rule is: for product images I use /media/catalog/product
Then, to avoid too many images in the same folder, I create one folder for every letter of the image name, up to some number of letters. Let's say 3. So my final path will be something like /media/catalog/product/m/y/p/myproduct.jpg
Like this, it is clear where to save any new image. You can do something similar using your report ids, categories, or anything that makes sense for you. The final objective is to avoid a too-flat structure and to create a tree that makes sense to you and can be automated easily. A small sketch of this path derivation follows.
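A tiny sketch of that path derivation (my own illustration); the base path and the depth of 3 simply match the example above:

import java.nio.file.Path;
import java.nio.file.Paths;

public class ImagePathResolver {

    private static final String BASE = "/media/catalog/product"; // fixed outer rule
    private static final int DEPTH = 3;                          // letters used as folder levels

    /** myproduct.jpg -> /media/catalog/product/m/y/p/myproduct.jpg */
    public static Path resolve(String fileName) {
        Path path = Paths.get(BASE);
        for (int i = 0; i < DEPTH && i < fileName.length(); i++) {
            path = path.resolve(String.valueOf(Character.toLowerCase(fileName.charAt(i))));
        }
        return path.resolve(fileName);
    }

    public static void main(String[] args) {
        System.out.println(resolve("myproduct.jpg")); // /media/catalog/product/m/y/p/myproduct.jpg
    }
}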
And that takes us to the next part:
Read and write.
I implemented a similar system before quite successfully. It allowed me to save files easily and to retrieve them easily, with locations that were purely dynamic. The parts here were:
S3 (but you can do with any filesystem)
A small microservice acting as a proxy for both read and write.
Some namespace system and attached logic.
The logic is quite simple. The namespace lets me know where the file will be saved. For example, the namespace can be companyname/reports/images.
Let's say I develop a microservice for read and write:
For saving a file, it receives:
namespace
entity id (i.e. your report)
file to upload
And it will do:
based on the rules I have for that namespace, the entity id and the file name, it will save the file in the corresponding folder
it doesn't return the physical location. That remains unknown to the client.
Then, for reading, clients will use a URL that uses also convention. For example you can have something like
https://myservice.com/{NAMESPACE}/{entity_id}
And based on the logic, the microservice will know where to find that in the storage and return the image.
If you have more than one image per report, you can do different things, such as:
- you may want to have a third slug in the path such as https://myservice.com/{NAMESPACE}/{entity_id}/1 https://myservice.com/{NAMESPACE}/{entity_id}/2 etc...
- if it is for your internal application usage, you can have one endpoint that returns the list of all eligible images; let's say https://myservice.com/{NAMESPACE}/{entity_id} returns an array with all image URLs
I implemented this with a quite simple YAML config to define the logic, and very simple code reading that config. That allowed me a lot of flexibility, for example saving reports in totally different paths, servers or S3 buckets if they belong to different companies or are different report types. A minimal sketch of that lookup follows.
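A minimal sketch of the namespace idea (my own, not the original implementation): the namespace names and base paths are invented, and the mapping is hard-coded here only to keep the example self-contained, whereas the real system loaded it from a YAML config.

import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Map;

public class NamespaceStorageResolver {

    // In the real system this mapping came from a YAML config file;
    // it is hard-coded here only to keep the sketch self-contained.
    private static final Map<String, String> NAMESPACE_ROOTS = Map.of(
            "companyname/reports/images", "/storage/companyname/reports/images",
            "companyname/invoices/pdf",   "/storage/companyname/invoices");

    /** Where to write a newly uploaded file for a given namespace and entity id. */
    public static Path locationFor(String namespace, long entityId, String fileName) {
        String root = NAMESPACE_ROOTS.get(namespace);
        if (root == null) {
            throw new IllegalArgumentException("Unknown namespace: " + namespace);
        }
        // Clients only ever see https://myservice.com/{NAMESPACE}/{entity_id};
        // the physical location below stays internal to the microservice.
        return Paths.get(root, String.valueOf(entityId), fileName);
    }

    public static void main(String[] args) {
        System.out.println(locationFor("companyname/reports/images", 42, "chart.png"));
        // -> /storage/companyname/reports/images/42/chart.png
    }
}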

Access database split problems

I am trying to split an Access database where I work but I have encountered a few issues that I am struggling to resolve. If I can first explain the problem.
I work for a large multi-national company that has on-site IT support but does not support Access (so no help there)
There are 12 of us working in our section, and we have an old and badly designed StockMaster database on the networked F: drive. The problem is that it is only set up for single users, so we have to take turns using it. We aren't a computer-savvy bunch; we tend to run the same named queries on a daily basis.
The database is only updated once per day, every morning we get a download from our colleagues in Amsterdam. I do not want to play around with this database as first of all I'm no expert and secondly if I break it, no one will fix it.
My plan is this;
I have created a new Access database, StockMaster2, that imports the required tables. Using VB-coded modules, it deletes the old tables then imports the new ones. Therefore every morning it replicates what is in the original database, and it works fine.
My next step is to split the database, create the front end and distribute. This is where I'm having problems.
I created the original front end StockMaster2_fe.accde and placed it in the database folder on the F:\ drive. Does every user get their own copy of the front end? I copied and saved two more front ends (copy and paste in the same folder -> rename) namely StockMaster2_alan_fe and StockMaster2_ryan_fe and tested it. I told Ryan (who sits next to me) to find the front end named after him on the F:\ drive and open it whilst I was in ...alan_fe. We both went to run macros at the same time but he was kicked out as it gave me exclusive access.
What am I doing wrong? Why is it not allowing multiple access?
My problem is that due to strict administrator privileges I cannot download any software or access the command line, so anything that I do must be done in Access itself.
I apologize for not seeing this post sooner to end your agony. There are two absolute main issues that must be resolved to get you on the right track. First, and perhaps the most important, is that your file has the name of StockMaster2_fe.accde. The extension, the accde, is the executable version. Design changes cannot be made to that version. The extension should say .accdb to provide you with all the flexibility to alter the database, create one database for back-end tables, and a second database for front-end objects to include queries, forms, reports, macros and modules. If you have the accdb version, then your work will start to get much easier.
Issue number two: if your team is not able to share the database, that is a sign that the database, when first opened, is opening in Exclusive mode. This option can be changed in the Access Options, in the Advanced menu, under the Advanced section. Look for "Default open mode"; it should say Shared to have multiple users operating all at once.
A possible hidden issue that can be happening, is that the database has VBA code which informs the database to open exclusively. With your version of the accde, you will not be able to access that code or change how the database opens.
Let's break this down (only because I finished all my work already...):
My next step is to split the database, create the front end and distribute. This is where I'm having problems. I created the original front end StockMaster2_fe.accde and placed it in the database folder on the F drive. Does every user get their own copy of the front end?
Yes
I copied and saved two more front ends (copy and paste in the same folder -> rename), namely StockMaster2_alan_fe and StockMaster2_ryan_fe, and tested it. I told Ryan (who sits next to me) to find the front end named after him on the F drive and open it whilst I was in ...alan_fe. We both went to run macros at the same time but he was kicked out as it gave me exclusive access. What am I doing wrong?
Ensure your back end contains only tables. Access is a "client-centric" database, which means when a query is run it pulls all of the data over the pipe to your local computer, does what it does, and then sends it back. So, make sure the back end has only tables and all the other jazz (macros, queries, etc...) are in the front end. Also, the front end will contain links to the back-end tables. All of your queries/macros/etc.. will reference these links, and not the tables in the back-end DB directly.
Why is it not allowing multiple access?
Also, make sure your table-locking scheme is multi-user friendly. If you're doing table locking, it will cause errors. If you're doing record locking, it probably won't.
My problem is that due to strict administrator privileges I cannot download any software or access the command line, so anything that I do must be done in Access itself.
Shouldn't be a problem at all.

Silverlight - Copy files directly into where Isolated Storage physically store files

In our Silverlight business application, we need to cache very large files (100's MBs) into isolated storage. We distribute the files separately to be downloaded by users and then they can import those files into Isolated Storage through the application.
However, the Isolated Storage API seem to be very slow and it takes an hour to import about 500MB of data.
Given, that we're in a corporate environment where users trust us, I would like users to be able to copy the files directly into the physical location on their file system where Silverlight store files when using the API.
The location varies per OS, but that's OK. The problem, however, is that Silverlight seems to store files in a somewhat cryptic way. If I go to my AppData\LocalLow\Microsoft\Silverlight\is, I can see some weirdly named folders that look like long GUIDs.
My question: is it possible to copy files directly in there, or is it going to upset Silverlight?
From what I've been testing, it will make stuff fail or act weird. We had some data we had to clear, and even though we did delete the files to test how it worked, the used space didn't drop. So there is some sort of register of which files are in Isolated Storage and how big they are.
I think it would be paramount that you find out why Isolated Storage is so slow. Can you confirm it's like that on all clients? Test some others. This should be brought up with Microsoft if it is the case. Possibly you could change your serialization schema and save smaller files? I would not advise trying to figure out Microsoft's temporary and volatile Isolated Storage location.

Should data files be stored on the same computer (server) the database is stored in?

Currently in our research group, we have many "data files" stored on three servers and a couple of personal computers running different operating systems.
We want to build a database, which would store some information in addition to the URLs of those various "data files". My question is: do we have to copy all the data files and put them in a directory on the same server the database is on? Or can they be left as they are on the different computers? If the second case is OK, what would be the format of the URL of the "data files"?
It really depends on what your intended goal is and what your current setup is like.
If the files are currently sitting somewhere on the network, and you need a path that the application can use to access them, you just need to store the network path (\\server\share\file for Windows environments) in the database, then read it and access that path to access the files. You'll need to make sure everyone has read access to them.
If the files are currently accessible through a website URL, internal or external, then again, you just need to store that URL (or some portion thereof) (http://mywebsite.com/myfile or http://servername/myfile) and access that.
If either of the above are not currently true, but you want them to be, then you'll need to set up a new share/webserver and put the files there. There's no requirement that this be the same server as the database, but it'd make for better backups if it was.
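To make the first two options concrete, here is a small sketch (my own, not from the answer): the data_file table, its location column and the connection details are assumptions; it just reads the stored path or URL back out and decides how to access it.

import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class DataFileLookup {

    public static void main(String[] args) throws Exception {
        // Assumed MySQL connection details and a hypothetical data_file table
        // holding metadata plus the stored location (UNC path or URL).
        try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://dbserver:3306/research", "app", "secret");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT location FROM data_file WHERE id = ?")) {
            ps.setLong(1, 123);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    String location = rs.getString("location");
                    if (location.startsWith("http")) {
                        // e.g. http://servername/myfile - fetch it over HTTP
                        System.out.println("Download from: " + location);
                    } else {
                        // e.g. \\server\share\file - read it directly, provided the
                        // user running this code has read access to the share
                        System.out.println("Readable: " + Files.isReadable(Paths.get(location)));
                    }
                }
            }
        }
    }
}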
If you want the files themselves to be in the database, you should check out Bob Fanger's link.
Not sure what you're asking here but...
If you want your database engine to read files filled with data, it probably doesn't matter where they are stored - though this may depend on the database you are using. Are you using MySQL? MS-SQL Server? Oracle?
Many database vendors provide relatively easy-to-use admin tools that would let you choose a file to be loaded, and usually the file chooser dialog lets you browse the network, so you could load a file over the network. Details on how to do this vary, so consult the manual for your database engine for loading data from a pre-existing file.
Be aware that if the database is on Computer A and the data is being loaded from Computer B over the network, it will probably be slower than if the data was on the same computer as the database.
It doesn't really matter if the files are stored outside the database anyway.
See Storing Images in DB - Yea or Nay? for more thoughts on that one.
If the files are accessible by a URL, you can store that with the metadata, like
http://server1/folder/file.ext, file://\\server1\folder\file.ext or file://P:\folder\file.ext
Things to consider:
Backups
Performance
Synchronisation between the meta-data and the data
