I want to create, via Excel or Oracle, a database for a storage room that is filled with all kinds of computer parts and equipment.
I have never created anything like that, so I wanted to know if you could help me out by giving me advice on how a beginner should create such a database.
It should be possible to insert and remove parts, or even update them.
I hope my question is readable and understandable.
Thanks
A simple option - one that gives you not only the table (against which you could write your own DML statements to insert, update or delete rows) but also a nice application - is Oracle Application Express (APEX).
Depending on the database version you use, it might already be installed by default. If not, ask your DBA to install it.
Alternatively, create a free account on apex.oracle.com; you'll get limited space (more than enough to do what you want to do).
In Application Builder, use the Excel file you have as a "source"; the APEX wizard will use it to create a table in the database as well as an application - a true GUI which works and looks just fine.
If you don't have anything at all, not even an Excel file, well ... that's another problem and requires some more work to be done.
you have to know what you want (OK, a storage room)
is a single table enough to contain all information you'd want to collect?
if so, which columns (attributes) do you want to collect?
if not (for example, you want to "group" items), you'll need at least two tables related to each other by a master-detail relationship, which also means you'll have to create a foreign key constraint (see the sketch below)
which datatypes are appropriate for which attributes? You wouldn't store item names in a number column, right? Nor should you store dates (when an item entered the storage room) as strings in a varchar2 column; use a date datatype column instead
etc.
Basically, YMMV.
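To make the master-detail idea concrete, here is a minimal sketch of a two-table design for a storage room. All names are invented for illustration, and the identity columns assume Oracle 12c or later:

create table categories (
    category_id  number generated always as identity primary key,
    name         varchar2(100) not null
);

create table items (
    item_id      number generated always as identity primary key,
    -- the foreign key that ties each item to its master row
    category_id  number not null references categories (category_id),
    name         varchar2(200) not null,
    quantity     number default 0,
    entered_on   date default sysdate  -- a real DATE column, not a string
);

Inserting, removing and updating parts then boils down to ordinary DML (insert into items ..., delete from items where ..., update items set quantity = ...), and the APEX wizard can generate forms for all three.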
Problem: Junior SQL dev here, working with a SQL Server database where we have many functions that use temp tables to pull data from various tables to populate Crystal Reports etc. We had an issue where a user action in our client caused a string to overflow the defined NVARCHAR(100) character limit of the column. As a quick fix, one of our seniors decided on a schema change to set the column definition to NVARCHAR(255), instead of fixing the issue of the string getting too long. Now we have lots of these table-based functions that use temp tables referencing the column in question, but the temp table variable is still defined as 100 instead of 255.
Question: Is there an easy way to find and update all of these functions? Some functions might not reference the table/column in question at all, but some rely heavily on this data to feed reports etc. I know I can right-click a table and select "View Dependencies" in SQL Server Management Studio, but it seems very tedious to go through all of them and then update our master schema before deploying it to all customers.
I thought about a find-and-replace, if there is a way to script or export the functions, but I fear one problem I will run into: one variable in one function might be declared as TransItemDescription NVARCHAR(100) and another as TransItemDesc NVARCHAR (100). I've heard of people avoiding temp tables, maybe because of issues like these, so maybe there is just bad database design here?
Thus far I've been going through them one at a time using "View Dependencies" in SSMS.
I think the best solution would be to script out the whole database into a single script from SSMS, then use Notepad++ (or equivalent) to find either:
All occurrences of NVARCHAR(100)
All occurrences of the variable name, e.g. TransItemDescription, TransItemDesc.
Once you have found all occurrences, make a list of all the functions to be fixed. You would still need to fix each function manually, but once that is complete the issue should be totally resolved.
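If you'd rather not eyeball one huge script, you can get the same list by searching the module definitions directly. A sketch (SQL Server; the patterns use the names from the question, so adjust them to your schema):

select object_schema_name(object_id) as schema_name,
       object_name(object_id)        as module_name
from   sys.sql_modules
where  definition like '%NVARCHAR(100)%'
   or  definition like '%NVARCHAR (100)%'  -- catch the spacing variant
   or  definition like '%TransItemDesc%';  -- also matches TransItemDescription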
My application has a database table that is used to record the attendance of employees. The column attendance_status has only three possible values - "present", "absent", "on_leave" - with NULL as the default.
How do I add it to the database? So far I have come up with two possible ways.
Option 1: Create another table attendance_status with status_id and status_value columns and add the above values to it. Then use the id in all of the application's SQL queries.
Option 2 (probably the bad way): Hardcode the values (maybe in a config file) and use them throughout the app's SQL queries.
Am I missing the right way? How should this be approached?
Either will work, but Option 1 gives you flexibility in the event that the requirements change, and it is the standard data model. I would, however, name my columns a little differently: id, value, name. The references then become attendance_status.id and attendance_status.value. The third column is available for use in displays or reports or whatever. value is on_leave and name is On leave.
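A minimal sketch of that model (the attendance table name is assumed; foreign key syntax varies slightly by RDBMS):

create table attendance_status (
    id    int primary key,
    value varchar(20) not null unique,  -- used in code, e.g. 'on_leave'
    name  varchar(50) not null          -- used in displays, e.g. 'On leave'
);

insert into attendance_status (id, value, name) values
    (1, 'present',  'Present'),
    (2, 'absent',   'Absent'),
    (3, 'on_leave', 'On leave');

alter table attendance
    add status_id int null references attendance_status (id);  -- NULL remains the default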
Option 2 works provided the data input point is totally closed. If someone codes new functionality, there is the risk that he or she will invent something different to mean the same thing, like onLeave.
I have a scenario where a Java developer has made a change to the variable that transfers data to column col of table tbl.
Now I have to change the column from varchar(15) to varchar(10). But before making this change, I have to handle the existing data and the constraints/dependencies on that column.
What is the best sequence for doing so?
I am thinking of checking the constraints first, then trimming the existing data, and then altering the table.
Please suggest how to handle the constraints/dependencies, and, before handling them, how to check for such dependencies.
Schema evolution (the DDL changes that happen over time to tables and columns in a database, while preserving existing data and functionality) is a well understood topic with several solutions, some of which are RDBMS independent while others are built into the RDBMS.
A key requirement for production environments is to have both a forward change and a back-out, each of which can be run unattended.
Many open source advocates use Liquibase, which also has a commercial variant.
Db2 for Linux/Unix/Windows also offers a built-in stored procedure, SYSPROC.ALTOBJ, which helps automate various schema-evolution alterations, including decreasing the size of a column. You would need to study its documentation carefully and test it fully on non-production environments until you are satisfied. Read about it here:
https://www.ibm.com/support/knowledgecenter/en/SSEPGG_11.1.0/com.ibm.db2.luw.sql.rtn.doc/doc/r0011934.html
You can grow your own script, of course, in whatever language you prefer, including SQL, but remember that you should also build and test a back-out script.
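As a sketch of the forward-change/back-out pairing, here is what a Liquibase "formatted SQL" changeset for the question's varchar(15) to varchar(10) change might look like (Db2 syntax; the trim step assumes the business allows truncating the excess characters):

--liquibase formatted sql

--changeset yourname:shrink-tbl-col-to-10
-- trim first so the ALTER cannot fail on oversized rows
update tbl set col = substr(col, 1, 10) where length(col) > 10;
alter table tbl alter column col set data type varchar(10);
--rollback alter table tbl alter column col set data type varchar(15);

Note that the rollback restores the column size but not the trimmed characters; for that you would need a backup of the original values. On some platforms a REORG or similar post-step is needed after the ALTER.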
I'm working on a data conversion utility which can push data from one master database out to a number of different databases. The utility itself will have no knowledge of how data is kept in the destination (table structure), but I would like to support writing a SQL statement that returns data from the destination using a complex query with multiple joins, as long as the data comes back in a standardized format (field names) that the utility can recognize in an ADO query.
What I would like to do is then modify the live data in this ADO query. However, since there are multiple join statements, I'm not sure if that's possible. I know that BDE at least (I've never used BDE) was very strict: you had to return all fields (*) and such. ADO, I know, is more flexible, but I don't know quite how flexible in this case.
Is it supposed to be possible to modify data in a TADOQuery in this manner, when the results include fields from different tables? And even if so, suppose I want to append a new record to the end (TADOQuery.Append) - would it append to two different tables?
The actual primary table I'm selecting from has a complementary table joined on the same primary key field: one is a "small" table (brief info) and the other is a "detail" table (more info for each record in the small table). So a typical statement would include something like this:
select ts.record_uid, ts.SomeField, td.SomeOtherField from table_small ts
join table_detail td on td.record_uid = ts.record_uid
There are also a number of other joins to records in other tables, but I'm not worried about appending to those. I'm only worried about appending to the "small" and "detail" tables - at the same time.
Is such a thing possible in an ADO Query? I'm willing to tweak and modify the SQL statement in any way necessary to make this possible. I have a bad feeling though that it's not possible.
Compatibility:
SQL Server 2000 through 2008 R2
Delphi XE2
Editing fields which have no influence on the joins is usually no problem.
Appending is trickier. You can limit the append to one of the tables by setting the "Unique Table" property:
procedure TForm.ADSBeforePost(DataSet: TDataSet);
begin
  inherited;
  // direct inserts/updates from this dataset to table_small only
  TCustomADODataSet(DataSet).Properties['Unique Table'].Value := 'table_small';
end;
but without a Requery you won't get much further.
The better way would be to set the values via a stored procedure, e.g. in BeforePost, then Requery and Abort.
If your view were persistent (a server-side view), you would be able to use INSTEAD OF triggers, as sketched below.
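Here is a sketch of that idea (SQL Server syntax; the view and trigger names are invented): a server-side view over both tables, plus an INSTEAD OF trigger that splits each insert across them.

create view v_small_detail as
select ts.record_uid, ts.SomeField, td.SomeOtherField
from table_small ts
join table_detail td on td.record_uid = ts.record_uid;
go

create trigger tr_v_small_detail_insert on v_small_detail
instead of insert as
begin
    -- write each half of the incoming row to its own base table
    insert into table_small (record_uid, SomeField)
    select record_uid, SomeField from inserted;

    insert into table_detail (record_uid, SomeOtherField)
    select record_uid, SomeOtherField from inserted;
end;
go

The Delphi side then selects from and appends to the view, and the trigger keeps both base tables in step.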
Jerry,
I encountered the same problem on Firebird, and from experience I can tell you that it can be done (up to a certain complexity) by using CachedUpdates. A very good resource is this one - http://podgoretsky.com/ftp/Docs/Delphi/D5/dg/11_cache.html. This article has the answers to all your questions.
I have abandoned the original idea of live ADO query updates, as it became more complex than I could wrap my head around. The scope of the data push project has changed, so this is no longer an issue for me, but it is still an interesting subject to know about.
The new structure of the application consists of attaching multiple "field links" to various fields of the original data set. Each of these links references the original field name and a SQL statement to be executed when that field is imported. Multiple field links can be attached to a single field and can therefore execute multiple statements, placing the value in various tables, etc. The end goal is an app with which I can easily and repeatedly export a common dataset from an original source to any outside source with a different data structure, without having to recompile the app.
However, the concept of cached updates was not appealing to me, simply because of the fact pointed out in the link in RBA's answer: the data can be changed in the database in the meantime. So I will instead integrate my own method of customizable data pushes.
We have a program in which each user is given their own Access database. We'd like to merge these all together into a single SQL Server database.
The problem is that, using the SQL Server import/export wizard, the primary/foreign keys do not get updated. So for instance if one user has this table:
1 Apple
2 Banana
and another user has this:
1 Coconut
2 Cheeseburger
the resulting table looks like this:
1 Apple
2 Banana
1 Coconut
2 Cheeseburger
Similarly, anything that referenced Banana by its primary key (2) is now referencing both Banana and Cheeseburger, which will not make the vegans very happy.
Is there any way to automatically update the primary/foreign key references when importing, other than writing an extremely long and complex import-script?
If you need to keep them fully compartmentalized, you have to assign some kind of partitioning column to each table. Is there a reason you need your SQL Server to have the same referential integrity as Access? Are you just importing to SQL Server for read-only reporting? In that case, I would not bother with RI. The queries will all require a partitionid/siteid/customerid. You could enforce that for single-entity access by wrapping the tables with table-valued UDFs which require the partitionid, as sketched below. For cross-site access that doesn't work.
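A sketch of such a wrapper (all names invented; SQL Server syntax): an inline table-valued function that makes the partition id impossible to forget.

create function dbo.fn_Fruits (@PartitionId int)
returns table
as
return
    select FruitId, FruitName
    from dbo.Fruits
    where PartitionId = @PartitionId;  -- every caller must say whose rows it wants

Queries then read select * from dbo.fn_Fruits(@PartitionId) instead of touching the table directly.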
If you are just loading to SQL Server for reporting, I would also consider altering the data model to support reporting (i.e. a dimensional model is sometimes better than a normalized model) instead of worrying about transaction processing.
I think we need to know more about the underlying goals.
More information about the requirements is needed.
My basic question is: do you need to preserve the original record keys? E.g. 1:apple in table T of user-database A; 1:coconut in table T of user-database B. Table T is assumed to have the same structure in all database instances. Reasons I can suppose you may want to preserve the original data: (a) you may have a requirement to reference the original data (perhaps for earlier reporting), and/or (b) there may be a data dependency in the application itself.
If the answer is 'no', then you are probably interested only in preserving all of the distinct data values. Let the SQL table build using a new key and constrain the relevant field so that it contains unique data. This approach preserves the original table structure (but not the original key value or its 'location') and may suffice to meet your requirement.
If the answer is 'yes,' I do not see a way around creating an index that preserves a pointer to the original database and the key that was created in its table T. This approach would seem to require an application modification.
The best approach in this case is probably to split the incoming data into two tables: one to identify the database and original key, another to identify the distinct data values. For example: (database) table D has records such as 'A:1:a', 'A:2:b', 'B:1:c', 'B:2:d', 'B:15:a', 'C:8:a'; (data) table T1 has records such as 'a:apple', 'b:banana', 'c:coconut', 'd:cheeseburger', where 'A' describes the original database 'location', 1 is the original key in location 'A', and 'a' is a value that equates records in table D and table T1. (Otherwise you have a lot of redundant data in the one table, e.g. A:1:apple, B:15:apple, C:8:apple.) Also, T1 has a structure similar to the original T and seems to be more directly useful in the application.
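In SQL terms, the split described above might look like this (all names are illustrative):

create table T1 (
    value_key  char(1) primary key,         -- 'a', 'b', ...
    item_name  varchar(50) not null unique  -- 'apple', 'banana', ...
);

create table D (
    source_db   char(1) not null,  -- which user database: 'A', 'B', ...
    original_id int     not null,  -- the key in that database's table T
    value_key   char(1) not null references T1 (value_key),
    primary key (source_db, original_id)
);

This way 'A:1', 'B:15' and 'C:8' all point at the single 'apple' row in T1 instead of repeating the value.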
Ended up creating an SSIS project for this. SSIS is a visual programming tool made by Microsoft (part of their Business Intelligence Development Studio, which comes with SQL Server) designed for solving exactly these sorts of problems.
Why not let Access use its replication manager to merge the databases? This will allow you to identify the conflicts and resolve them before importing to SQL Server. I'm fairly confident it will retain the foreign key relationships. If I understand your situation correctly, and the databases are the same structure with different data, you could load the combined database to the application and verify the data before moving to SQL Server.
What version of Access are you using? Here's a link for Access 2000; adjust the search parameters to fit your version.
http://technet.microsoft.com/en-us/library/cc751054.aspx