I'm currently creating a database for a school with 90 students. When testing the function that displays all database entries in a table, I found that only 10 entries can be displayed in the table at one time. Is there any way I can extend or remove this limit?
def DataTable():
    def View():
        # Insert every row returned by the database module into the table
        for row in db_actions.GetAllStudents():
            print(row)
            tree.insert("", tk.END, values=row)

    tree = ttk.Treeview(TableView, column=("c1", "c2", "c3", "c4"), show='headings')
    tree.column("#1", anchor='center')
    tree.heading("#1", text="Number")
    tree.column("#2", anchor='center')
    tree.heading("#2", text="First name")
    tree.column("#3", anchor='center')
    tree.heading("#3", text="Surname")
    tree.column("#4", anchor='center')
    tree.heading("#4", text="Class")
    tree.pack()
    View()

DataTable()
The role of
for row in db_actions.GetAllStudents():
is to retrieve data from a database in another program. I have tested this, and the issue is unlikely to be database-related, because the program works perfectly when there are fewer than 10 entries.
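For what it's worth, ttk.Treeview shows 10 rows by default; rows beyond that are still inserted, just not visible. A minimal sketch, assuming that is what is happening here (the height value and the scrollbar wiring are illustrative, not taken from the original program):

import tkinter as tk
from tkinter import ttk

root = tk.Tk()

# height sets how many rows are visible at once (the default is 10);
# rows beyond that still exist and can be reached by scrolling.
tree = ttk.Treeview(root, columns=("c1",), show='headings', height=25)
tree.heading("#1", text="Number")

# A vertical scrollbar makes all 90 entries reachable regardless of height.
scrollbar = ttk.Scrollbar(root, orient='vertical', command=tree.yview)
tree.configure(yscrollcommand=scrollbar.set)
tree.pack(side='left', fill='both', expand=True)
scrollbar.pack(side='right', fill='y')

for i in range(90):
    tree.insert("", tk.END, values=(i + 1,))

root.mainloop()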
Recently my principal has me creating a database for a tardy system. What I've done so far is set up a Google Form where a student ID can be entered through a num pad and recorded in a Google Sheet. I have set up two sheets: one that shows each student's name and how many tardies they have, and another holding the Google Form responses. Each tardies cell in the first sheet has the formula =COUNTIF('Form Responses'!B:B,""), which essentially counts the number of times a certain student ID appears. Under FERPA I am not legally allowed access to the student IDs. Is there any possible way to make it so that when a person enters their student ID, it adds/creates a new formula for the tardies cells to check through the entire list, without duplication or an error?
I have tried the =COUNTIF('Form Responses'!B:B,"") formula, but that would mean my principal would have to edit a thousand lines of formulas.
Depending on the exact setup of your sheets, you can use a formula like this, guessing that the students' IDs are in column A:
=BYROW(A2:A,LAMBDA(each,IF(each="","",COUNTIF('Form Responses'!B:B,each))))
That sets a single formula that automatically fills down the entire column.
Or, instead of keeping a list of IDs, you can use a QUERY that finds all the ID values in the responses and their counts:
=QUERY('Form Responses'!B:B,"SELECT B,Count(B) where B is not null group by B")
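If you want friendlier column headers in that output, QUERY's label clause can rename them (a sketch built on the same assumption that IDs are in 'Form Responses'!B:B):
=QUERY('Form Responses'!B:B,"SELECT B, COUNT(B) WHERE B IS NOT NULL GROUP BY B LABEL B 'Student ID', COUNT(B) 'Tardies'")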
Short form of my question: Is there a way to split a Collection into several sub-Collections, based on SQL database entries?
Update
Okay, a simpler version, perhaps. I'm really struggling with this...
I start with a collection: ScanDataCollection_SmartComm_MasterList
It looks like this:
Result
REQ1991799.RITM2280596.01
REQ2048874.RITM2349401.01
REQ2037354.RITM2335400.01
I have a database table:
Master_Transaction_Log
...which has three particular columns of interest:
Timestamp
Scan_Code
Transaction_Type
I would like to end up with TWO collections:
SC_ReturnToDepot
Result
REQ1991799.RITM2280596.01
SC_Remainder_1
Result
REQ2048874.RITM2349401.01
REQ2037354.RITM2335400.01
The criteria are as follows: for any given Result in ScanDataCollection_SmartComm_MasterList, if:
A database record has Scan_Code = Result AND Transaction_Type = "New Equipment Delivery - Cust. Msg: Equipment Returning to Depot" AND Timestamp is more than 72 hours ago, then that value of Result is added to SC_ReturnToDepot.
SC_Remainder_1 contains all remaining values that do not fit the above criteria.
I got as far as this so far, but it's killin' me after this:
ClearCollect(SC_ReturnToDepot,
ForAll(ScanDataCollection_SmartComm_MasterList,
...?
);
);
I have a feeling if I can just nail that one single line of code, I am off to the races, but this is just... ugh, my brain is being a jerk.
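One hedged sketch of that missing piece, assuming Power Apps' Filter/CountRows can query the SQL table directly, and reading "more than 72 hours ago" as "Timestamp older than 72 hours" (delegation limits on the SQL source may apply; this is a sketch, not a verified solution):

// Rows of the master list whose Scan_Code has a matching, old-enough record:
ClearCollect(SC_ReturnToDepot,
    Filter(ScanDataCollection_SmartComm_MasterList,
        CountRows(
            Filter(Master_Transaction_Log,
                Scan_Code = Result,
                Transaction_Type = "New Equipment Delivery - Cust. Msg: Equipment Returning to Depot",
                Timestamp < DateAdd(Now(), -72, TimeUnit.Hours)
            )
        ) > 0
    )
);
// Everything not claimed above:
ClearCollect(SC_Remainder_1,
    Filter(ScanDataCollection_SmartComm_MasterList,
        Not(Result in SC_ReturnToDepot.Result)
    )
);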
-=-=-=-=-=-=-
Longer more detailed explanation:
I'm trying to create a mechanism that takes a list of Scan Codes from a physical in-field process (scanning codes on boxes on a shelf) and, for each Code, reviews a SQL database table (Master_Transaction_Log) looking for that code. It then creates further Collections that include the associated Scan Codes, in a mutually exclusive fashion.
At a high level:
Collection ScanDataCollection_SmartComm_MasterList, which contains one column Result, must be split out into the seven different lists, below. These lists are in a specific sequence (because we send communications in a specific sequence):
Collection Name: ScanDataCollection_SmartComm_ReturnToDepotImmediately - Criteria: If there is an entry in the database table where Scan_Code = the given Scan Code, where Transaction_Type = "New Equipment Delivery - Cust. Msg: Equipment Returning to Depot" and where Timestamp is older than 48 hours. - Notes: The technician will be instructed to immediately place this item into an outgoing bin for pickup. - Data Note: This Scan Code must not be included in any Collection below this one on this list.
Collection Name: ScanDataCollection_SmartComm_AnnounceRemovalOfItem - Criteria: If there is an entry in the database table where Scan_Code = the given Scan Code, where Transaction_Type = "New Equipment Delivery - Cust. Msg: Final Warning" and where Timestamp is older than 48 hours. - Notes: The customer receives an email for these items, announcing that the items will be returned to the Depot. - Data Note: This Scan Code must not be included in any Collection below this one on this list.
Collection Name: ScanDataCollection_SmartComm_SendFinalWarning - Criteria: If there is an entry in the database table where Scan_Code = the given Scan Code, where Transaction_Type = "New Equipment Delivery - Cust. Msg: Warning" and where Timestamp is older than 48 hours. - Notes: The customer receives an email for these items, announcing that this is their last chance to pick them up. - Data Note: This Scan Code must not be included in any Collection below this one on this list.
Collection Name: ScanDataCollection_SmartComm_SendFirstWarning - Criteria: If there is an entry in the database table where Scan_Code = the given Scan Code, where Transaction_Type = "New Equipment Delivery - Cust. Msg: Reminder" and where Timestamp is older than 48 hours. - Notes: The customer receives an email for these items, announcing that this is a warning to pick them up. - Data Note: This Scan Code must not be included in any Collection below this one on this list.
Collection Name: ScanDataCollection_SmartComm_SendReminder - Criteria: If there is an entry in the database table where Scan_Code = the given Scan Code, where Transaction_Type = "New Equipment Delivery - Cust. Msg: First Contact" and where Timestamp is older than 48 hours. - Notes: The customer receives an email for these items, announcing that this is a reminder to collect their order. - Data Note: This Scan Code must not be included in any Collection below this one on this list.
Collection Name: ScanDataCollection_SmartComm_SendFirstContact - Criteria: If there is an entry in the database table where Scan_Code = the given Scan Code and where Transaction_Type = "New Equipment Delivery - Received at Stockroom/Tech Bar". (note: no time delay -- this should go out immediately upon arriving at a location) - Notes: The customer receives an email for these, announcing the orders are ready to pick up.
Collection Name: ScanDataCollection_SmartComm_NoActionTaken - Criteria: The given Scan Code (Result) has not been added to any of the items above, based on previous logical rules. For example, a Reminder might have been sent, but it was sent only ten hours ago, so we take no action on this item.
Each of these COLLECTIONS only needs a single column: Result.
It's important that no single Scan Code reside in more than one collection -- they are built to address an escalating notification sequence. For example, a Scan Code for an item that has been on a shelf for a week may already have a First Contact sent, a Reminder sent, and a Warning sent. It must ONLY go into the ScanDataCollection_SmartComm_SendFinalWarning Collection, because all items in that Collection will be sent Final Warnings.
I have already written the parts of the program that generate and send the appropriate emails (including householding, which was a tough nut to crack). I think all I need is to split my Master Collection into these sub-Collections; then I can simply attack each sub-Collection with its own ForAll loop.
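Extending the earlier two-collection sketch into the seven-way split is the same pair of steps repeated per level, in sequence. A hedged sketch (the SC_Remaining working collection is invented for illustration; each level filters only what is still unassigned, which enforces the one-collection-per-Scan-Code rule):

// Start with every scanned code unassigned.
ClearCollect(SC_Remaining, ScanDataCollection_SmartComm_MasterList);

// Level 1 of 7: Return to Depot (48-hour threshold per the criteria above).
ClearCollect(ScanDataCollection_SmartComm_ReturnToDepotImmediately,
    Filter(SC_Remaining,
        CountRows(
            Filter(Master_Transaction_Log,
                Scan_Code = Result,
                Transaction_Type = "New Equipment Delivery - Cust. Msg: Equipment Returning to Depot",
                Timestamp < DateAdd(Now(), -48, TimeUnit.Hours)
            )
        ) > 0
    )
);
// Shrink the pool so this level's codes cannot reappear below.
ClearCollect(SC_Remaining,
    Filter(SC_Remaining,
        Not(Result in ScanDataCollection_SmartComm_ReturnToDepotImmediately.Result)
    )
);

// ...repeat the same Filter-then-shrink pair for AnnounceRemovalOfItem,
// SendFinalWarning, SendFirstWarning, SendReminder, and SendFirstContact,
// each with its own Transaction_Type (and no time check for SendFirstContact)...

// Whatever survives every level has no action due yet.
ClearCollect(ScanDataCollection_SmartComm_NoActionTaken, SC_Remaining);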
I would dearly dearly appreciate advice on this! And of course, I'm happy to offer clarifications where needed.
-=-=-=-=-
Larger perspective: Right now, our techs organize incoming physical boxes across different physical shelves, and each day they move sections of boxes and perform different comms based on shelf. For example, Shelf #3 gets Reminders sent, and then all items are moved to Shelf #4 for the next day. But as our production system ramps up, more boxes arrive each day. So instead of having the technicians determine which comm to send based on which shelf, and moving dozens of boxes each day, I just want them to scan the entire room and then let the tool look into the database and decide what is the next most appropriate comm to send. This will save them about 20-40 minutes a day. Furthermore, in the near future, once this system is working well, I plan to centralize comms from a single technician, instead of each tech at each remote location performing this function. But later, later...
In my AnyLogic model I successfully create plots of datasets that count the number of trucks arriving from terminals each hour in my simulation. Now I want to add the actual/"observed" number of trucks arriving at a terminal, to compare my simulation to these numbers. I added these numbers to a database table (see picture below). Is there a simple way of adding this data to the plot?
I tried creating a variable that reads the database table every hour and adding that to a dataset (as can be seen in the pictures below), but unfortunately this did not work (the plot was empty).
Maybe simply delete the variable and fill the dataset at the start of the model by looping through the database table data. Use the database query wizard to create a for-loop. Something like this should work:
// Size the dataset to the number of rows in the observed_arrivals table.
int numEntries = (int) selectFrom(observed_arrivals).count();
DataSet myDataSet = new DataSet(numEntries);
List<Tuple> rows = selectFrom(observed_arrivals).list();
for (Tuple row : rows) {
    // One (hour, observed arrivals) point per table row.
    myDataSet.add(row.get(observed_arrivals.hour), row.get(observed_arrivals.terminal_a));
}
myChart.addDataSet(myDataSet);
You don't explain why it "didn't work" (what errors/problems did you get?), nor where you defined these elements.
(1) Since you want both observed (empirical) and simulated arrivals per terminal, datasets for each should be in the Terminal agent. And then the replicated plot (in Main) can have two data entries referring to data sets terminals(index).observedArrivals and terminals(index).simulatedArrivals or whatever you name them.
(2) Using getHourOfDay to add to the observed dataset is wrong because that just returns 0-23 (i.e., the hour in the current day for the current model date). Your database table looks like it has hours since model start, so you just want time(HOUR) to get the model time in elapsed hours (irrespective of what the model time unit is). Or possibly time(HOUR) - 1 if you only want to update the empirical arrivals for the hour at the end of that hour (i.e., at the same time that you updated the simulated arrivals).
(3) Using a Variable to get the database value each hour doesn't work because a variable's initial value is only evaluated once at model initialisation. You want an hourly cyclic Event in Terminal instead which adds the relevant row's value. (You need to use the Insert Database Query wizard to generate the relevant Java code for the query you need in the event's action.)
(4) Because you have a database table with specifically-named columns for each terminal (columns terminal_a and presumably terminal_b etc.) that makes it slightly more awkward. (This isn't proper relational table design where, instead of 4 columns for the 4 terminals, you'd instead have two columns for terminal_id and observed_value with a row for each time period and terminal combination.)
So your database query expression (in your Terminal agents) will need to use the SQL format (not the QueryDSL format) so that you can 'stitch in' the correct column name into the SQL.
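For illustration, a hedged sketch of what that hourly event's action might look like with a raw SQL string (columnName is an assumed String parameter on each Terminal agent holding e.g. "terminal_a"; check the exact call the Insert Database Query wizard generates for your project):

// columnName is an assumed parameter on this Terminal agent, stitched
// into the SQL text so each agent reads its own column.
int hour = (int) time(HOUR);
Object value = selectFirstValue(
    "SELECT " + columnName + " FROM observed_arrivals WHERE hour = " + hour);
observedArrivals.add(hour, ((Number) value).doubleValue());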
I laid out the report, and I run a query that returns 30+ rows for a given period of time, one for each workflow. Now I want to take that dataset (so the query only runs once) and define 6 datasets from it, each filtering it down to a single selected row. I will populate 8 boxes on the form for each of those datasets.
It appears that when you create a new dataset, it wants to go back to the data source and ask you about all the data from that one again.
I was able to create a dataset that is a filtered view of the query, and figured I could live with creating 6 datasets that run the query each time, filtering it differently each time. So I need a way to 'copy' the dataset and 'paste' it back in as a new dataset that is the same as the other one, except with a new name.
I also need to set the default values for the Start/End dates to the 1st of last quarter and the first day of this quarter. Is there a way to create calculated default values, or do I need a query to return them?
If you are using table objects to create your report, you could use just one dataset and add your filters to each table accordingly: go to Tablix Properties >> Filters tab.
For default values you can create expressions, e.g. for the first day of the current quarter:
=switch(DatePart(DateInterval.Quarter, today()) = 1, cstr(year(today())) + "-01-01",
        DatePart(DateInterval.Quarter, today()) = 2, cstr(year(today())) + "-04-01",
        DatePart(DateInterval.Quarter, today()) = 3, cstr(year(today())) + "-07-01",
        DatePart(DateInterval.Quarter, today()) = 4, cstr(year(today())) + "-10-01")
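Since the question also asks for the first day of the previous quarter, here is a more compact alternative using standard VB date functions (a sketch; verify against your report before relying on it).
First day of the current quarter:
=DateSerial(Year(Today()), (DatePart(DateInterval.Quarter, Today()) - 1) * 3 + 1, 1)
First day of the previous quarter:
=DateAdd(DateInterval.Month, -3, DateSerial(Year(Today()), (DatePart(DateInterval.Quarter, Today()) - 1) * 3 + 1, 1))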
I have an MVC application in which I need to display data from 3 tables. I am using an entity model for it. For 2 of them I have made the association: the users and payment tables.
The 3rd table, month_<monthid>, is created every month to store the users to whom the magazine is sent. The table name month_<monthid> is generated dynamically by selecting the month, so in order to fetch the data I have used ExecuteStoreQuery. For a small amount of data the listing is fast, but for a large amount it is very slow.
I have created a class, bound to the grid, which includes all the fields from the 3 tables to display.
But when I am getting a large volume of data, about 12,000 records, it takes about 30 minutes to go through the loop, assign the data to the class objects, and add them to the result list that is finally bound to the Telerik grid.
I am attaching the sample code via a link. Is there any direct way to bind the query result of the joined tables to the grid, instead of going through the loop and preparing the list for the model class? I think that would save time.
The code block preparing the list using ExecuteStoreQuery is in the function GetuserList().
foreach (var r in result)
{
    Result objresult = new Result();
    // Linear scan of the payment DataTable for every user row -- this is the hot spot.
    var paymentresult = from sub in dtpayment.AsEnumerable()
                        where sub.Field<int>("user_id") == r.user_id
                        select sub;
    var payment = paymentresult.FirstOrDefault();
    if (payment != null)
    {
        objresult.amount_paid = payment.Field<decimal>("amount_paid");
        objresult.magzine_id = payment.Field<int>("magzine_id");
    }
    objresult.address = r.address;
    objresult.email = r.email;
    objresult.name = r.name;
    objresult.user_id = r.user_id;
    objresult.month = smonth;
    lstresult.Add(objresult);
}
This foreach loop, where I am using ExecuteStoreQuery, is taking a very long time.
But I have observed that simply joining the users and payment tables using a LINQ query to get all 12,000 records (i.e., with no involvement of the month table) returns results faster.
So, can you suggest any way to improve the performance of my application?
The database structure is included with the sample code.
Below is the link to the sample:
http://sampletestone.s3.amazonaws.com/magzine.7z?AWSAccessKeyId=AKIAINHDRCMKC5GUSNFA&Expires=1303583399&Signature=8o8Wn6UNjbEl3dIyipAX9xH29Hg%3D
Nobody in the world wants to see 12,000 records. You should look at implementing paging & searching functionality.
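Beyond paging, the loop itself can be made far cheaper: the per-row LINQ scan over dtpayment makes it O(n x m). A hedged sketch, reusing the names from the posted code (requires System.Linq, System.Data, and System.Collections.Generic), that builds a one-time dictionary keyed by user_id:

// One pass over the payments table builds an O(1) lookup.
Dictionary<int, DataRow> paymentsByUser = dtpayment.AsEnumerable()
    .GroupBy(row => row.Field<int>("user_id"))
    .ToDictionary(g => g.Key, g => g.First());

foreach (var r in result)
{
    Result objresult = new Result();
    objresult.address = r.address;
    objresult.email = r.email;
    objresult.name = r.name;
    objresult.user_id = r.user_id;
    objresult.month = smonth;

    // Dictionary probe replaces the per-user scan of every payment row.
    DataRow payment;
    if (paymentsByUser.TryGetValue(r.user_id, out payment))
    {
        objresult.amount_paid = payment.Field<decimal>("amount_paid");
        objresult.magzine_id = payment.Field<int>("magzine_id");
    }
    lstresult.Add(objresult);
}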