I need an Access database that will be used by about 6-10 people, but not on a share drive [closed]

I work on a project that has very well defined lines of responsibility. There are about six to ten of us, and we currently do all of our work in Excel, building a single spreadsheet with maintenance requirements for ships. A couple of times during the project process we stop all work and compile all of the individual spreadsheets into one. Since each person has a well defined area, we don't have to worry about one person overwriting another person's work. It only takes an hour, so it isn't that huge of a deal. Less than optimal, sure, but it gets the job done.
But each person fills out their data differently. I think moving to a database would serve us well by making the data more regimented, with validation rules. But the problem is, we do not have any type of share drive or database server where we can host the database, and that won't change. I was wondering if there is a simple solution similar to the way we handle the Excel spreadsheets. I envisioned a process where I would wipe the old data and then import the new data, but I suspect that will bring up other problems.
I am pretty comfortable building small databases and using VBA and whatnot. This project would probably have about six tables, and probably three that would have the majority of the data for any given project (the others would be reference tables and slow-to-change data). Bottom line is, I am wondering if it is worth it, or should I stick with Excel?

Access 2007 onwards has an option for "Collecting email replies", which can gather flat data, but only a single query can be populated, so it might be a bit limiting.
The only solution I can think of that's easier than what you currently use is to create the DB with some VBA modules that export all new/updated data to an XML/CSV file and attach it to an email. You'd then create another VBA module to import the data from these files into the master tables.
It's a fair amount of work to set up, but once working it should be fairly quick and robust.
Edit: just to add, I have solved a similar problem, but I solved it with VB.NET and XML files rather than Access.

You can link Access databases to other databases (or import from them). So you can distribute a template database for users to add records to and then email back. When you receive the copies back, you would either import them into or link them to a master database and do whatever you need to do with the combined data.
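For instance, once a returned copy has been saved locally, a single append query can pull its rows into the master table. Here is a minimal sketch in Access SQL, where the ShipMaintenance table name and the file path are hypothetical; the IN clause points the query at the external file:

    INSERT INTO ShipMaintenance
    SELECT *
    FROM ShipMaintenance IN 'C:\Returns\SmithCopy.accdb';

SELECT * assumes the template and master tables have identical layouts; in VBA you could loop over the returned files and run this statement for each one with DoCmd.RunSQL.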

Related

Should databases be separated based on size and load? [closed]

I'm developing a web backend with two modules. One handles a relatively small amount of data that doesn't change often. The other handles real-time data that's constantly being dumped into the database and never gets changed or deleted. I'm not sure whether to have separate databases for each module or just one.
The data between the modules is interconnected quite a bit, so it's a lot more convenient to have it in a single database.
But if anything fails, I need the first database to be available for reads as soon as possible; the second one can wait.
Also, I'm not sure how much performance impact the constantly growing second database would have on the first one.
I'd like to make dumps of the data available to public, and I don't want users downloading gigabytes that they don't need.
And if I decide to use a single one, how easy is it to separate them later? I use Postgres, btw.
Sounds like you have a website with its content being the first DB, and some kind of analytics being the second DB.
It makes sense to separate those physically (as in, on different servers), especially if one of them is required to be available as much as possible. Separating mission-critical parts from something less important is good design. Also, a smaller DB means shorter recovery times from a backup, should the need arise.
For the data that is interconnected, if you need remote lookup from one DB into another, Foreign Data Wrappers may help.
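A minimal sketch of the Postgres setup (server, database, and table names here are invented):

    -- on the main database: make the remote 'events' table queryable locally
    CREATE EXTENSION postgres_fdw;

    CREATE SERVER analytics_server
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'analytics-host', dbname 'analytics');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER analytics_server
        OPTIONS (user 'app_user', password 'secret');

    -- import just the one table, then join it like a local one
    IMPORT FOREIGN SCHEMA public LIMIT TO (events)
        FROM SERVER analytics_server INTO public;

    SELECT c.title, count(e.event_id)
    FROM content c
    JOIN events e ON e.content_id = c.content_id
    GROUP BY c.title;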

Convention for Database Creation [closed]

We're developing a new product at work and it will require the use of a lightweight database. My coworkers and I, however, got into a debate over the conventions for database creation. They were of the mindset that we should just build a quick outline of the database and then go in and indiscriminately add and delete tables and stuff until it looks like what we want. I told them the proper way to do it was to make a script that follows a format similar to this:
Drop database;
Create Tables;
Insert Initial Data;
I said this was better than randomly changing tables: you make changes only to the script and re-run it every time you want to update the design of the database. They said it was pointless and that their way was faster (which holds a bit of weight since the database is kind of small, but I still feel it is a bad way of doing things). Their BIGGEST concern was that I was dropping the database; they were upset that I was going to delete the random data they had put in there for testing purposes. That's when I clarified that you include inserts as part of the script to act as initial data. They were still unconvinced. They told me that in all of their time with databases they had NEVER heard of such a thing. The truth is we all need more experience with databases, but I am CERTAIN that this is the proper way to develop a script and create a database. Does anyone have any online resources that clearly explain this method and can back me up? If I am wrong about this, then please feel free to correct me.
Well, I don't know the details of your project, but I think it's pretty safe to assume you're right on this one, for a number of very good reasons.
If you don't have a script that dictates how the database is structured, how will you create new instances of it? What happens when you deploy to production, or the database gets accidentally deleted, or the server crashes? Having a script means you don't have to remember all the little details of how it was set up (which is pretty unlikely even for small databases).
It's way faster in the long run. I don't know about you, but in my projects I'm constantly bringing new databases online for things like unit testing, new branches, and deployments. If I had to recreate the database by hand every time, it would take forever. Yes, it takes a little extra time to maintain a database script, but it will almost always save you time over the life of the project.
It's not hard to do. I don't know what database you're using, but many of them support exporting your schema as a DDL script. You can just start with that and modify it from then on. No need to type it all up. If your database won't do that, it's worth a quick search to see if a 3rd-party tool that works with your database will do it for you.
Make sure you check your scripts into your source control system. They're just as important as any other part of your source code.
I think having a data-seeding script like you mentioned is a good idea, but keep it as a separate script from the database creation script. This way you can have a developer seed script, a unit-testing seed script, a production seed script, etc.
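To make the idea concrete, here is a minimal sketch of such a script pair; all table and column names are invented, and DROP TABLE IF EXISTS is the Postgres/MySQL spelling:

    -- schema.sql: the single source of truth; re-run whenever the design changes
    DROP TABLE IF EXISTS order_item;
    DROP TABLE IF EXISTS product;

    CREATE TABLE product (
        product_id  INT PRIMARY KEY,
        name        VARCHAR(100) NOT NULL,
        price_cents INT NOT NULL
    );

    CREATE TABLE order_item (
        order_item_id INT PRIMARY KEY,
        product_id    INT NOT NULL REFERENCES product (product_id),
        quantity      INT NOT NULL
    );

    -- seed_dev.sql: kept separate so each environment can seed its own data
    INSERT INTO product (product_id, name, price_cents)
    VALUES (1, 'Sample widget', 499);

Note the drop order: child tables are dropped before the tables they reference.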

Database schema for Partners [closed]

We have an application to manage companies, teams, branches, employees, etc., and have different tables for each. Now we have a requirement to give access to the same system to our technology partners so that they can do the same things we do, but at the same time we need to supervise these partners in our system.
So, in terms of DB schema, what would be the best way to manage them:
1) Duplicate the entire schema for the partners, which means duplicating around 50-60 tables now and many more in the future as the system grows.
2) Add a flag to each table that tells whether a row belongs to an internal or external entity.
If anyone has experience with this, please share your suggestions.
Consider the following points before finalizing any of the approaches.
Do you want a holistic view of the data?
By this I mean: do you want to view the data your partner creates and the data you create in a single report/form? If the answer is yes, then it makes sense to store the data in the same set of tables and differentiate it based on some set of columns.
Is your application functionality going to vary significantly?
If the answer to this question is no, then it makes sense to keep the data in the same set of tables. This way, any changes you make to your system will automatically apply to all users, and you won't have to replicate your code across schemas/databases.
Are you and your partner going to use the same master/reference data?
If the answer to this question is yes, then again it makes sense to use the same set of tables, since you will do away with unnecessary redundant data.
Implementation
Rather than creating a flag, I would recommend creating a master table, say user_master. The key of this table should be present in every transaction table. This way, if you want to include a second partner down the line, you can make a new entry in your user_master table and make the necessary modifications to your application code. Your application code should manage the security; needless to say, you need to implement as much security as possible at the database level too.
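A sketch of that layout (all names and types are purely illustrative):

    CREATE TABLE user_master (
        user_id   INT PRIMARY KEY,
        user_name VARCHAR(100) NOT NULL,
        user_type VARCHAR(10) NOT NULL    -- e.g. 'INTERNAL' or 'PARTNER'
    );

    -- every transaction table carries the key, so each row has an owner
    CREATE TABLE work_order (
        work_order_id INT PRIMARY KEY,
        user_id       INT NOT NULL REFERENCES user_master (user_id),
        description   VARCHAR(255)
    );

The application then filters every query on ownership, e.g. SELECT * FROM work_order WHERE user_id = 42.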
Other Suggestions
To physically separate the data of these entities, you can implement either partitioning or sharding, depending on the DB you are using (see the sketch after this list).
Perform thorough regression testing and check that your data is not visible in partner reports or forms. Also check that the partner is not able to update or insert your data.
Since the data in your system will increase significantly, it makes sense to performance-test your reports, forms, and programs.
If you are using indexes, you will need to revisit them, since your WHERE conditions will change.
Also, revisit your keys and relationships.
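For instance, with PostgreSQL's declarative list partitioning (the syntax varies by DBMS, and the names here are invented), internal and partner rows can live in physically separate partitions of one logical table:

    CREATE TABLE work_order (
        work_order_id BIGINT NOT NULL,
        owner_type    TEXT NOT NULL,      -- 'INTERNAL' or 'PARTNER'
        description   TEXT
    ) PARTITION BY LIST (owner_type);

    CREATE TABLE work_order_internal PARTITION OF work_order
        FOR VALUES IN ('INTERNAL');
    CREATE TABLE work_order_partner PARTITION OF work_order
        FOR VALUES IN ('PARTNER');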
Neither of the suggested approaches is advisable. You need to follow guidelines like the following to secure your whole system and audit your technology partner as well.
[1] Create a module on the admin side that shows you the existing tables as well as tables that will be added in the future.
[2] Create a user for your technology partner and grant permissions on those objects.
[3] Keep an audit-trail table and insert an entry with the user name, IP, etc. into it, so you have complete tracking of the activity carried out by your technology partner (see the sketch below).
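A sketch of points [2] and [3], using Postgres-style syntax with invented object names:

    -- [2] a dedicated account for the partner, limited to specific objects
    CREATE USER partner_user WITH PASSWORD 'change-me';
    GRANT SELECT, INSERT ON work_order TO partner_user;

    -- [3] one row per action, written by the application
    CREATE TABLE audit_trail (
        audit_id    BIGSERIAL PRIMARY KEY,
        user_name   VARCHAR(100) NOT NULL,
        ip_address  VARCHAR(45),           -- wide enough for IPv6 text form
        action      VARCHAR(20) NOT NULL,  -- e.g. 'INSERT', 'UPDATE'
        object_name VARCHAR(64),
        action_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
    );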

Log file not stored [closed]

I need to do research about log files not being stored in a database. I do not know much about database systems, so I need someone to give me at least some ideas about it. What I was told is that some of the log files were not written to a bank's database. The log files come from various sources like ATMs, the website, etc. For example, the reason could be a high rate of data flow causing some data to be left out.
The question is: what are the reasons behind this, and what could be the solutions?
I would really appreciate it if you could share some articles about it.
Sorry if I could not explain it well. Thanks in advance.
Edit: what I meant was not that some system intentionally avoids writing some of the log files to the database. What I meant is that some of the log files are not written to the database and the reason is not known; my intention is to identify the possible reasons and solutions. The database belongs to a bank and, as you can imagine, lots of data flows into it every second.
Well, the question is not very clear, so let me rephrase it:
What are the reasons why application logs are not stored in a database?
It depends on the context, and there are different reasons.
First question: why might you store logs in a database at all? Usually you do it because they contain data that is relevant to you and that you want to manipulate.
So why not always store this data:
you are not interested in the logs except when something goes wrong, and then it's more about debugging than storing logs.
you don't want to mix business data (users, transactions, etc.) with less important / less relevant data.
the volume of logs is too large for your current system, and putting them in a database might crash it completely.
you might want to use another system to dig into the logs, with a different type of storage (Hadoop, big data, NoSQL).
when you back up a database, you usually back up all of it. Logs are not as important as other critical data, are bigger, and would take up too much space.
there is no need to always put logs in a database. Using plain text and some other tools (web server logs, for instance) is usually more than enough.
So it's for these reasons that logs are in general not stored in the same database as the application.

Best way to archive sql data and show in web application whenever required [closed]

I have around 10 tables containing millions of rows. Now I want to archive 40% of the data due to size and performance problems.
What would be the best way to archive the old data while keeping the web application running? And what if, in the near future, I need to show the old data along with the existing data?
Thanks in advance.
There is no single solution for every case; it depends a lot on your data structure and application requirements. The most general cases seem to be as follows:
If your application can't be redesigned and instant access is required to all your data, you need a more powerful hardware/software solution.
If your application can't be redesigned but some of your data can be counted as obsolete because it is requested relatively rarely, you can split the data and configure two applications to access the different data sets.
If your application can't be redesigned but some of your data can be counted as insensitive and can be minimized (consolidated, packed, etc.), you can perform some data transformation, as well as keep the full data in another place for special requests.
If it's possible to redesign your application, there are many ways to solve the problem. In general you will implement some kind of archive subsystem, which is a complex problem, especially if not only your data changes over time but the data structure changes too.
If it's possible to redesign your application, you can also optimize your data structure using new supporting tables, indexes, and other database objects and algorithms.
Create an archive database, and if possible maintain it on a different archive server; this data won't be needed often but still has to be kept for future purposes, so moving it off reduces load and space on the main server.
Move the old table data to that location (see the sketch after this list). Later you can retrieve it back in a number of ways:
Changing the application's path
or updating the live table from the archive table
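A minimal sketch of the move itself, assuming an orders table with a date column (the names and cutoff date are invented); wrapping both statements in one transaction ensures rows are never in both tables or neither:

    BEGIN;

    -- copy rows older than the cutoff into the archive table
    INSERT INTO orders_archive
    SELECT * FROM orders
    WHERE order_date < DATE '2020-01-01';

    -- then remove them from the live table
    DELETE FROM orders
    WHERE order_date < DATE '2020-01-01';

    COMMIT;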
