Restricting changes in one view instance in ADF - oracle-adf

I have created two instances of one particular VO (view object) in my application module. When I make changes (not yet committed) in one view instance, they are reflected in the other view instance as well. For example: say there is a StudentVO and I have created two instances (std1, std2) of StudentVO in my AM. I have queried both instances (std1, std2) for one particular student, i.e. both are currently holding the same student record. Now I modify one attribute, say marks, through std1, and the change shows up in std2 as well. Is there a way to stop this? I need to see the old marks in std2 and the modified marks in std1.

This happens because your VO is based on an EO: both view object instances share the same entity cache in the application module, so an uncommitted entity-level change shows up in every VO instance built on that EO. It would not happen for a VO based on a plain SQL query, but of course you need an EO-based VO to update records.
You would need two different VOs based on different EOs, or one of the VOs would have to be based on a query.
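If the second instance only needs to display the unchanged data, a read-only, query-based VO is usually enough. A minimal sketch of such a query, assuming a STUDENTS table with the attributes from the example (your actual table and column names will differ):

-- Read-only VO query (assumed table/column names); it reads straight from the
-- database, so it shows the committed marks, not the pending change made via std1
SELECT s.STUDENT_ID,
       s.NAME,
       s.MARKS
FROM STUDENTS s

Because a query-based VO has no underlying EO, it does not see uncommitted entity-level changes made through the other instance.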

Related

VB/SQL - How To Filter DataGridView From Joined Table With Multiple Matches

I'm using SQL Server and Visual Studio 2019.
I'm looking for advice on the best way to filter my documents list (in a DataGridView) based on which view they appear in.
I have 3 database tables shown below:
I have simplified my Document table for this post, but there is quite a lot of details for each document displayed in the DataGridView.
I fill a DataTable, which I then wrap in a DataView, which is used as my DataGridView's DataSource.
Within the SQL that fills the DataTable I have a custom column that uses a JOIN to determine the View that each document appears in, I later then filter the document list based on the custom column.
This only really works if the document appears in just one View; otherwise the JOIN returns multiple rows for the document, one per matched View (hope that makes sense).
I have simplified the View table for this post also, a View can contain Sub Views if you like i.e.:
View 1
> Sub View 1
> Sub View 2
> Child of Sub View 2
This means a Document can appear more than once in a View as it may appear in various Sub Views.
I'm wondering whether I'm best to carry on retrieving the custom column, or whether I should run a whole new SQL query and even create a new DataTable / DataView based on the View table.
Some of our workers are remote and some projects can contain thousands of documents, so I want to ensure I use the most robust method, but also one that causes the least delay in retrieving the information and gives the user the best experience.
I hope I've explained everything well enough for you guys to get the gist of what I'm trying to achieve.
Thank you in advance and I would appreciate any help on this.
Edit - Following first answer post:
As you say I probably haven't explained enough, so I've put together a visual example of what I'm trying to achieve.
When a project is loaded, the TreeView hasn't been clicked yet, so no filter is applied and all documents are listed:
If a user clicks "View 1 > Sub View 2" on the TreeView it filters based on the data in CustomJOINColumn, see below:
If a user clicks "View 1 > Sub View 1" or "Another View > Documents" on the TreeView it again filters based on the data in CustomJOINColumn, see below:
As you can see Document D001 has appeared in 3 different views/sub views.
The problem is shown in the first image. Document D001 is listed 3 times as it is associated with 3 views/sub views.
I only want Document D001 to appear once in the first list, but when the corresponding Node in the TreeView is clicked it filters correctly as shown.
I hope this makes sense.
There might be some details of your case that indicate a different solution, but from what you have here it sounds like you should drop the custom column, drop the full fill with filtering, and implement a focused fill for your DataGridView. It doesn't even involve much of a change; it's a slight modification rather than a major rewrite.
Why? Because especially if you have remote workers on a WAN, the performance hit of loading the full list of documents will be crushing if you have many large views. A focused load allows you to spend time loading only the relevant document details. You get the first 500ms or so "for free" with users, but if you take even 30 seconds to load and filter they'll think it's crashed. And your application takes more memory when it's stuffed full of grid rows of document details you'll never use. Plus, as you've already discovered, coming up with a filtering scheme based on membership in an arbitrary number of overlapping groups is really hard, and a focused load is really easy.
So how to do it?
First, go to your DataSet and modify the TableAdapter with "Add" -> "Query". You'll define a new query that takes a parameter (@View) and returns data in the same format as the original, but only for documents in that view. Give it an obvious name: when it offers the name "Fill", make it "FillByView".
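A sketch of what the FillByView query might look like, reusing the table and column names from the example query further down (tblDoc, MapDocView, DocID, ViewID and so on are placeholders; substitute your real schema):

-- Sketch only: same document columns as the original Fill, restricted to one view
SELECT DISTINCT D.DocID, D.docNum, D.docTitle, D.docOther
FROM tblDoc AS D
     INNER JOIN MapDocView AS M ON M.DocID = D.DocID
WHERE M.ViewID = @View
ORDER BY D.docNum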
Next, grab a copy of the code in your Form_Load event that fills the grid (something like tadaptDocs.Fill(tblDocList)) and comment it out there, but place a copy in the Change event for the control where the user selects the view. Change it from "Fill" to "FillByView" and add the @View parameter.
You'll probably want the Form_Load event to set the view to the last View chosen (saved in My.Settings) or to the first view in the list; that will kick off a Change event and populate the DataGridView with an initial set of documents.
Now, every time the user picks a new view, the grid is loaded with just those documents in that view, regardless of what other views they may be in.
When you save the changes via "Update", VB automatically applies the changes to the underlying data; you don't have to worry that it will truncate your database table to just the documents in that view. It makes an internal distinction between records that were deleted and records that were never loaded. You also don't need to make new versions of the Update, Insert, or Delete methods on the TableAdapter; only Fill needs a new version that considers View.
EDIT: The following was added in response to the additions clarifying the problem.
It sounds like a focused load is still called for, but only when the user selects a view. Your initial view should be a "fill all", but modified to strip duplicates. If you have many details in the grid you'll probably want to break that out into two parts with a CTE; an example follows:
;WITH cteCust AS (
    -- Core document info plus the custom join column; ViewRank numbers the
    -- views each document appears in so duplicates can be stripped below
    SELECT D.DocID AS DocID, D.docNum, D.docTitle, V.CustJoinName AS CustJoin,
           ROW_NUMBER() OVER (PARTITION BY D.docNum ORDER BY V.viewTitle) AS ViewRank
    FROM #tblDoc AS D
         LEFT OUTER JOIN #MapDocView AS M ON D.DocID = M.DocID
         OUTER APPLY dbo.fnGetCustJoinName(M.ViewID) AS V -- schema-qualify the TVF (adjust if not dbo)
)
SELECT C.DocID, C.docNum, C.docTitle, C.CustJoin, D.docOther
FROM cteCust AS C
     INNER JOIN #tblDoc AS D ON C.DocID = D.DocID
WHERE C.ViewRank = 1 -- keep only the first view per document
ORDER BY D.docNum
A few elements of this bear explaining; reply in comments if you need more details.
cteCust extracts the core document info and adds the custom join column. I'm not sure how you're producing that column; a table-valued function is one way, and I used it here for simplicity, so substitute whatever you're actually using. You said there was a lot of document info; this lets you work with just the core details and tie in the other fields later. We're using a LEFT OUTER JOIN on the Map table because you want docs without a view to still show up in the unfiltered list.
ViewRank is how you get rid of duplicates: by partitioning a ROW_NUMBER function over the documents, you keep only the first view each one appears in. In the focused fill you get exactly the view you filtered on; for the initial fill, you said each document should appear only once, so ViewRank = 1 picks a single view per document.
Note that the final SELECT ties it back to the Documents table so you can pick up the extra fields.
NOTE: In your example you have a simple membership model - you want docs that belong to a specific view - not a hierarchical one where you want all docs in a view and all of its sub views (i.e. D006 is in Approved but does not show up when you select the parent Documents in example 3 of your updated question). If those requirements change and your customers want a recursive membership scan, you should post it as a second question; this Q&A is already huge, and you'll get fresh eyes on it if it's a new question. Post a reference here too, but make it a new question.

Tableau - 2 Custom SQL Queries - One Parameter updates two sheets

I have two different data sets using two different SQL queries. Essentially one data set is day/caller stats rolled up; the other set is the call data. So each set of call data rolls up to produce its day/caller data.
I needed to separate these two queries for performance, because I needed one extract and one parameterized custom query for the call data. So essentially I will always bring in this month's and last month's data for the day/caller set.
What I need to do is create one dashboard that has the caller and all of their stats aggregated for the time period. Then I need to be able to click a row to bring up all of the call data in a different sheet on the same dashboard.
I am in the home stretch and need a way to connect these two sheets and update the call data. Right now I only have a parameter for the unique ID of the callers, not time, so I bring in all of the same days of calls even though it is really not needed. In a perfect world I would click the caller in the report and my second query would update to the appropriate day range and unique ID and produce only that caller's calls. My problem right now is that no matter what I do, I cannot get the one sheet to update the second call sheet. I have successfully created a manually functioning report, but I need the action to filter to a time period and the specific caller.
Let me know if you have any feedback. My two issues come from having two separate queries: the caller data (225k rows, held in an extract) and the call data (7 million rows if unfiltered), which needs to be a live connection so that when the sheet is clicked the parameters update and those calls populate. Anything would help!
The solution I can think of is to use an action filter; there is an option below to select the fields to map between the sheets. Choose "Selected fields" instead of "All fields" and map the ID and time between the two data sources.
Apart from this I don't really get what the issue is. If you need further clarification, please rephrase your question and provide examples and your data structure.

Make Access Form Update With Buttons

I constructed an Access database for a group of end-users. This database is composed of one table, tblInventory, and several queries for them to edit their data quickly/easily. One of my queries, for example, is:
UPDATE tblInventory SET Amount = Amount-[Enter Amount]
WHERE ((([tblInventory].Equiptment_Name)=[Enter Name]));
This worked great in my opinion, but I have to please the end-user after all. They requested that I make a form and use buttons to update the data in the table for them. I have the form laid out like this:
The Equipment_Name and Amount boxes pull their information from my table, which has fields with those names. My unbound textbox is where I would like them to enter the number of the given part they want to take out of inventory. The button should run my query above, but instead of prompting for inputs I would like it to use what they entered into the textbox. I've tried many different things and searched many different sites but cannot find what I'm looking for.
P.S. Equiptment_Name and Amount are the only two data fields in the table, apart from other fields that serve as more lenient ways to search for data when names are entered. Those fields are called things such as Alt_Name1 and have no real relevance to the form.
Thanks in advance for any help given.
There are a couple of ways you can do it, but the simplest is:
Build your query (or queries) as predefined, saved queries that read their inputs from the form (a sketch follows the steps below).
Build a Macro that disables warnings and then executes your query or queries in the order you wish.
Go to the form and define the button.
Go to the Event tab.
Build an event.
Set the OnClick event to the name of the Macro.
Save and test.
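A minimal sketch of what that saved query might look like when it reads its inputs from the open form instead of prompting. YourFormName and txtAmountOut are placeholders for your actual form name and the unbound textbox; the Equipment_Name reference points at the bound name box on the form:

UPDATE tblInventory
SET Amount = Amount - [Forms]![YourFormName]![txtAmountOut]
WHERE Equiptment_Name = [Forms]![YourFormName]![Equipment_Name];

When the macro runs this query while the form is open, Access resolves the [Forms]!...! references to the current values of those controls, so the user is never prompted.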

Optimization suggestions for sql server table

I have a table containing user input which needs to be optimized.
I have some ideas about how to solve this, but I would really appreciate your input. The table that needs optimization is called Value in the structure below.
All tables mentioned below have integer primary keys called Id.
Specs: MS SQL Server 2008, LINQ to SQL, ASP.NET website, C#.
The current structure looks as follows:
Page -> Field -> FieldControl -> ValueGroup -> Value
Page
A page is a container for one or more Fields.
Field
A field is a container for one or more FieldControls such as a textbox or dropdown-options.
Relationships: PageId
FieldControl
If a Field is of the type 'TextBox' then a single FieldControl is created for the Field.
If a Field is of the type 'DropDown' then one FieldControl per dropdown option is created for the Field containing the option text.
Relationships: FieldId
ValueGroup
Each time a user fills in Fields within a Page and saves it, a new ValueGroup (Id) is created to keep track of the user input that belongs to that save. When a user wants to look at a previously filled-in form, the ValueGroup is used to load the Values into the FieldControls of that previously filled-in instance.
Relationships: None
Value
The actual input of a FieldControl. If the user typed 'Hello' in a TextBox, then 'Hello' would be stored in a row in this table, together with a reference back to the FieldControl that 'Hello' was entered in. A ValueGroup is linked to Values in order to group them and keep track of which save/instance they belong to, as described under ValueGroup.
Relationships: ValueGroupId, FieldControlId
The problem
If 100,000 Pages are fully filled in, each containing 10 TextBoxes, we get 100,000 * 10 records in the Values table, meaning we quickly reach one million records, which makes it really slow as it is now. The user can create as many different pages with as many different Fields as he/she likes, and all these values are stored in the Values table. The way I use this data is either by displaying a gridview with pagination that shows all records for a single Page type, or by looking at a specific Page instance (Values grouped by ValueGroupId).
Some ideas that i have:
Good indexing should be very important when optimizing the Values table.
Should I perhaps add a foreign key directly back to Page from Value, ending up with an index on (Id, PageId, ValueGroup), allowing the gridview to retrieve only the values that are relevant for one Page?
Should I look into partitioning the table, and if so, how would you recommend I do it? I was thinking that partitioning by Page, hence getting chunks of values that are only relevant to a certain page, would be wise in this case, right? How would the script/schema look for something like that, given that pages can be created/removed at any time by the users?
PS. There should be a badge on this forum for all the people who finished reading this long post, and I hope I've made myself clear :)
Just to close this post: correct indexing solved all the performance problems.
This may be slightly off-topic, but why? Is this data that you need to access in real-time, or is it for some later processing? Could you perhaps pack the data into a single row and then unpack it later?
Generic
You say it is slow now, and there can be many reasons for that other than the database, such as low memory, high CPU, disk fragmentation, network load, socket problems, etc.
These should show up in a system monitor.
Try for instance the Sysinternals (now MS) tool: http://live.sysinternals.com/procexp.exe
But if that is all under control, then back to the database.
Database index
One million records is not "that much" and should not be a problem.
An index should do the trick if you don't have any indexes right now.
You should probably set indexes on all tables if you haven't done so already.
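A minimal sketch of the kind of indexes that could help on the Value table, assuming the relationship columns described in the question (ValueGroupId, FieldControlId); adjust the names to your actual schema:

-- Fetch all Values for one saved instance (ValueGroup) quickly
CREATE NONCLUSTERED INDEX IX_Value_ValueGroupId
    ON [Value] (ValueGroupId) INCLUDE (FieldControlId);

-- Fetch all Values entered for one FieldControl quickly
CREATE NONCLUSTERED INDEX IX_Value_FieldControlId
    ON [Value] (FieldControlId);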
I tried to draw a database model; is this right?
http://www.freeimagehosting.net/image.php?a39cf99ae5.png
Table structure (?)
Page -> Field -> FieldControl -> ValueGroup -> Value
The table structure looks like it may not be the optimal one, but it is hard to say exactly since I don't know how the application works.
Do all tables have the foreign keys of the table above?
Is this somewhat similar to your code?
Pseudo code:
1. Get the page info. Gives the key "page-id".
2. Get all Fields marked with that "page-id". Gives the keys "field-id" & "fieldcontrol-id".
3. Loop through all field-ids and get the FieldControl for each one.
4. Loop through all field-ids and get all ValueGroups. Gives a list of "valuegroup-id" keys.
5. Loop through all ValueGroups and get all fields.
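If the data access really does loop like that, every iteration is a separate round trip to the database. A single joined query, sketched below with the relationship columns named in the question (PageId, FieldId, FieldControlId; the table names and the @PageId parameter are illustrative), fetches everything for a page in one go:

-- One round trip instead of a loop per field (assumed names)
SELECT v.Id, v.ValueGroupId, v.FieldControlId, fc.FieldId, f.PageId
FROM [Value] AS v
     INNER JOIN FieldControl AS fc ON fc.Id = v.FieldControlId
     INNER JOIN Field AS f ON f.Id = fc.FieldId
WHERE f.PageId = @PageId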

WPF and LINQ to Entities binding to newly added records

I'm in the process of learning LINQ to Entities and WPF so forgive me if I get some terminology wrong. I have a model which contains Clients and I want the user to be able to bulk enter up to 20 clients at a time (this will be done by data entry staff off a paper list so I want to avoid entering one and saving one).
I was planning on adding 20 new clients to my model and have a datagrid/listbox bound to this.
In LINQ, how do I select the newly added records from the model? I could rely on certain fields being blank, but is there a better method? Alternatively, is there another way of doing this?
DataContext db; // your LINQ to SQL data context (the question mentions LINQ to Entities; adapt accordingly)
ChangeSet changes = db.GetChangeSet();             // pending inserts, updates and deletes
var newClients = changes.Inserts.OfType<Client>(); // illustrative: the not-yet-saved Client rows
The change set will contain lists of the newly inserted, updated and deleted records. If you access it prior to any SubmitChanges you should be able to get what you want. However, LINQ does perform inserts in a transactional manner, so what is it that you want to achieve here?
