Rendering a table combining multiple lists: Include empty records in Django annotate - django-models

I am building a table that shows data broken down by grade level. Depending on how I filter by date, there is a solid chance that I will not have any records for a grade level.
For example, right now the formatting of the table looks good only if there are incident records for each grade.
I am using annotate to break down incidents by grade, gender, etc., and then I zip all the lists together:
students_by_grade = Student.objects.values('grade').annotate(students_by_grade=Count('id')).order_by('grade')
incidents_by_grade = StudentItem.objects.filter(created_at__gt=relative_range).values('item_student__grade').annotate(incidents_by_grade=Count('id')).order_by('item_student__grade')
male_incidents_by_grade = StudentItem.objects.filter(item_student__gender='M').filter(created_at__gt=relative_range).values('item_student__grade').annotate(male_incidents=Count('id')).order_by('item_student__grade')
female_incidents_by_grade = StudentItem.objects.filter(item_student__gender='F').filter(created_at__gt=relative_range).values('item_student__grade').annotate(female_incidents=Count('id')).order_by('item_student__grade')
context['students_by_grade'] = zip_longest(students_by_grade,incidents_by_grade,male_incidents_by_grade,female_incidents_by_grade,fillvalue='0')
Is there a way I can force a zero into a list when there are no records returned for a grade level? In this case, there is an incident record for 7th grade that shows up at the top, next to the 6th-grade student count.
Here is the result of the zip:
({'grade': '6', 'students_by_grade': 96}, {'item_student__grade': '7', 'incidents_by_grade': 1}, '0', {'item_student__grade': '7', 'female_incidents': 1})
({'grade': '7', 'students_by_grade': 62}, '0', '0', '0')
({'grade': '8', 'students_by_grade': 72}, '0', '0', '0')
({'grade': None, 'students_by_grade': 20}, '0', '0', '0')
I'm open to other ideas on how to achieve this end result.
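One way to get the zeros in is to skip zip_longest, which aligns by position rather than by grade, and instead merge the querysets into one row per grade using dict lookups. A sketch, assuming the .values() shapes shown above; the helper name rows_by_grade is mine:

```python
def rows_by_grade(students, incidents, male, female):
    """Align the four annotated querysets on grade, filling missing grades with 0.

    Each argument is an iterable of dicts as produced by .values().annotate(),
    keyed on 'grade' or 'item_student__grade' as in the question.
    """
    def to_map(rows, grade_key, count_key):
        return {r[grade_key]: r[count_key] for r in rows}

    s = to_map(students, 'grade', 'students_by_grade')
    i = to_map(incidents, 'item_student__grade', 'incidents_by_grade')
    m = to_map(male, 'item_student__grade', 'male_incidents')
    f = to_map(female, 'item_student__grade', 'female_incidents')

    # One tuple per grade that has students; missing counts default to 0.
    return [
        (grade, s[grade], i.get(grade, 0), m.get(grade, 0), f.get(grade, 0))
        for grade in s
    ]
```

In the view you would then set context['students_by_grade'] = rows_by_grade(students_by_grade, incidents_by_grade, male_incidents_by_grade, female_incidents_by_grade), and the template iterates over plain tuples with real zeros instead of '0' fill strings.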

Related

Group by and filter based on sum of a column in google apps script

I am trying to group by vendor and id and take the sum of total weight from the below table:
Vendor Id Weight
AAA 1 1234
AAA 1 121
AAA 2 5182
BBB 1 311
BBB 1 9132
BBB 2 108
I have the below query that groups by Vendor and Id and sums Weight (the variable 'row' is the input table):
var res_2 = alasql('SELECT [0] as Vendor,[1] as Id, sum([2]) as Total_Weight FROM ? GROUP BY [0], [1]',[row]);
Result is as follows:
[ { Vendor: 'AAA', Id: '1', Total_Weight: 1355 },
{ Vendor: 'AAA', Id: '2', Total_Weight: 5182 },
{ Vendor: 'BBB', Id: '1', Total_Weight: 9443 },
{ Vendor: 'BBB', Id: '2', Total_Weight: 108 }, ]
For the next part, I need to loop over this array and, for every unique vendor, take the maximum 'Total_Weight', get the corresponding 'Id', and push 'Vendor' and 'Id' to another array.
Hence, the result has to be:
[{Vendor: 'AAA', Id: '2'},{Vendor: 'BBB', Id: '1'}]
Can anyone guide me on whether this can be accomplished with plain loop logic, or do I need to modify the query itself? Any suggestions would be appreciated.
I see you put a google-sheets tag here; I think your problem can be solved within that tool.
The first stage you can get using a query formula:
=query(B2:D,"select B, C, sum(D) where B is not null group by C, B order by sum(D) desc")
Since the query sorts the table by the sum in descending order, you can use the vlookup function to take the first row for each vendor and build a table:
=ArrayFormula(ifna(vlookup(unique(F5:F),F4:H8,{1,2},false)))
Or you can do both stages together (using the query as the table inside vlookup):
=ArrayFormula(ifna(vlookup(unique(B3:B),query(B3:D,"select B, C, sum(D) where B is not null group by C, B order by sum(D) desc"),{1,2},false)))
The result is an array with two columns: the vendor and the ID corresponding to the max sum.
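For the original Apps Script question, the group-max step is a straightforward single pass over the grouped rows. Here is that logic sketched in Python for illustration (the JavaScript translation is mechanical; the function name max_weight_ids is mine):

```python
def max_weight_ids(rows):
    """For each vendor, pick the Id with the largest Total_Weight.

    rows: list of dicts shaped like alasql's output in the question,
    e.g. {'Vendor': 'AAA', 'Id': '1', 'Total_Weight': 1355}.
    """
    best = {}
    for r in rows:
        v = r['Vendor']
        # Keep this row if it's the first for the vendor, or heavier than the best so far.
        if v not in best or r['Total_Weight'] > best[v]['Total_Weight']:
            best[v] = r
    return [{'Vendor': v, 'Id': r['Id']} for v, r in best.items()]
```

Run against the grouped result in the question, this yields [{'Vendor': 'AAA', 'Id': '2'}, {'Vendor': 'BBB', 'Id': '1'}].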

Find relevance of search results in Azure Search

I have an Azure Search index containing address information of users, with fields and corresponding weights as follows:
weights= #{
HouseNumber = '40'
StreetName = '36'
City = '30'
PostalCode = '29'
Province = '25'
Country = '21'
FSA = '20'
Plus4 = '16'
SuiteName = '12'
SuiteRange = '11'
StreetPost = '10'
StreetPre = '8'
StreetSuffix = '6'
}
I am using searchMode=any for querying. How can I decide that the record with the max score is the most relevant one? If the user doesn't enter all the keywords of the address, the relevance of the records may vary. E.g., a string containing '1A1' could be part of the postal code 'A1A 1A1', or it could be a house number. The query will return both records, but with different scores. How should I fix this?
If a query's terms can match multiple fields (e.g. if '1A1' can match results in the PostalCode and the HouseNumber field), then the scoring profile would be working as expected by boosting each respective result.
You should instrument the application so the query is field-scoped. That way, each part of the query is searching against the proper field and the matches are boosted accordingly.
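As a sketch of what field-scoping can look like: Azure Search's full Lucene syntax (queryType=full) supports fielded terms such as PostalCode:"A1A 1A1". Below is a minimal Python helper that builds such an expression; the helper name and the input shape (a mapping from field name to the value the UI captured for it) are my own assumptions:

```python
def field_scoped_expr(terms_by_field):
    """Build a Lucene-syntax search expression that scopes each term to a field,
    so a fragment like '1A1' only matches the field the user actually typed it into.

    Requires queryType=full on the search request, since fielded terms are
    part of the full Lucene syntax, not the simple syntax.
    """
    return ' AND '.join(f'{field}:"{value}"' for field, value in terms_by_field.items())
```

For example, field_scoped_expr({'HouseNumber': '40', 'PostalCode': 'A1A 1A1'}) produces 'HouseNumber:"40" AND PostalCode:"A1A 1A1"', which is then passed as the search parameter; the scoring profile's per-field weights still apply on top.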

Creating a postgres sequence grouped per key in a single table

Let's say I have a table in postgres (pseudo code is easier)
Message:
id (UUID)
conversation_id (UUID)
message_seq_id ????
content (Text)
What I want is for the database to generate message_seq_id as a sequence grouped by conversation_id.
Rather than creating a sequence for every single conversation, can I have a counter that is tied to another column in the same table?
so that rows would look like:
Messages:
Message(id='123456', conversation_id = '1', message_seq_id = 1, content 'helloworld')
Message(id='123457', conversation_id = '2', message_seq_id = 1, content 'helloworld')
Message(id='123458', conversation_id = '3', message_seq_id = 1, content 'helloworld')
..........
Message(id='123456', conversation_id = '1', message_seq_id = 2, content 'helloworld')
Such that message_seq_id auto-increments on a per-conversation_id basis only.
I can do it in code, but would be nice if postgres supported it without having to create thousands of separate sequences.
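A common workaround is to compute the next per-conversation value at insert time with MAX(...) + 1, backed by a unique constraint so a race produces a retryable error rather than a silent duplicate. Here is a minimal sketch using sqlite3 so it runs standalone; in Postgres the same INSERT ... SELECT works, typically wrapped in a trigger or guarded by an advisory lock when there are concurrent writers:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute("""
    CREATE TABLE message (
        id TEXT PRIMARY KEY,
        conversation_id TEXT NOT NULL,
        message_seq_id INTEGER NOT NULL,
        content TEXT,
        UNIQUE (conversation_id, message_seq_id)
    )
""")

def add_message(msg_id, conversation_id, content):
    # Next seq = current max for this conversation + 1 (1 if none yet).
    # NOTE: under concurrent writers this needs serialization (e.g. a
    # Postgres advisory lock keyed on conversation_id, or retry on the
    # unique-constraint violation).
    conn.execute(
        """INSERT INTO message (id, conversation_id, message_seq_id, content)
           SELECT ?, ?, COALESCE(MAX(message_seq_id), 0) + 1, ?
           FROM message WHERE conversation_id = ?""",
        (msg_id, conversation_id, content, conversation_id),
    )

add_message('123456', '1', 'helloworld')
add_message('123457', '2', 'helloworld')
add_message('123458', '1', 'helloworld')
```

This gives each conversation its own 1, 2, 3, ... numbering without creating thousands of separate sequence objects.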

Update table with SET command IF / ELSE

I'm trying to update a table, and I want to run two different SET scenarios depending on a stock value.
Here is working code that does one scenario:
UPDATE dbo.ar
SET webPublish = '0',
ArtProdKlass = '9999',
VaruGruppKod = '9999',
ItemStatusCode = '9' -- Discontinued from the assortment
FROM tmp_9999
WHERE ar.ArtNr = tmp_9999.art AND ar.lagsaldoartikel < '1'
What I would like is: IF the last condition (ar.lagsaldoartikel) is > '1', then run this SET instead:
SET webPublish = '1',
ArtProdKlass = '1',
VaruGruppKod = '9999',
ItemStatusCode = '8'
So I have tested something like this:
IF AR.lagsaldoartikel < '1'
SET webPublish = '0',
ArtProdKlass = '9999',
VaruGruppKod = '9999',
ItemStatusCode = '9' -- Discontinued from the assortment
FROM tmp_9999
WHERE ar.ArtNr = tmp_9999.art -- Selects items that exist only in the text file and have a stock balance less than 1
ELSE
SET webPublish = '1',
ArtProdKlass = '1',
VaruGruppKod = '9999',
ItemStatusCode = '8' -- Discontinued from the assortment
FROM tmp_9999
WHERE ar.ArtNr = tmp_9999.art -- Selects items that exist only in the text file and have a stock balance less than 1
Using CASE:
UPDATE dbo.ar
SET webPublish = '0',
ArtProdKlass = '9999',
VaruGruppKod = '9999',
ItemStatusCode = CASE WHEN AR.lagsaldoartikel < '1' THEN '9' ELSE '8' END
FROM tmp_9999
WHERE ar.ArtNr = tmp_9999.art
(If ItemStatusCode et al are numeric you should treat them as such.)
IF is part of procedural T-SQL. You can't use procedural statements inside relational ones; the only way to use IF would be to have two separate UPDATE statements, one in each branch of the IF. However, that's a bad idea: it isn't concurrency-safe.
One way to accomplish what you're trying to do is to use the case statement instead - that's just an expression, so it can be used in the set clause just fine:
set webPublish = case when AR.lagsaldoartikel < '1' then '0' else '1' end
(etc. for the other arguments).
However, I'd like to warn you: this is almost certainly a bad idea. It's probably going to backfire on you in the future, when you realize there are ten different conditions and a hundred different possible values you might want. Consider a more idiomatically relational way of doing this, for example driving the conditions and target values from a separate table. It's not necessary now, but if your conditions ever grow beyond a reasonable size, remember to consider restructuring the whole command.
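Putting the pieces together, every column can pick its value with its own CASE inside a single UPDATE. A runnable sketch of that pattern using sqlite3 (so the T-SQL UPDATE ... FROM join becomes an IN subquery, and the stock column is compared as a number rather than the string '1'); the sample rows are invented:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE ar (ArtNr TEXT, lagsaldoartikel INTEGER,
                     webPublish TEXT, ArtProdKlass TEXT,
                     VaruGruppKod TEXT, ItemStatusCode TEXT);
    CREATE TABLE tmp_9999 (art TEXT);
    INSERT INTO ar VALUES ('A1', 0, '', '', '', ''), ('A2', 5, '', '', '', '');
    INSERT INTO tmp_9999 VALUES ('A1'), ('A2');
""")

# One statement covers both scenarios: each column chooses its value via CASE,
# so the update stays atomic and concurrency-safe.
conn.execute("""
    UPDATE ar SET
        webPublish     = CASE WHEN lagsaldoartikel < 1 THEN '0' ELSE '1' END,
        ArtProdKlass   = CASE WHEN lagsaldoartikel < 1 THEN '9999' ELSE '1' END,
        VaruGruppKod   = '9999',
        ItemStatusCode = CASE WHEN lagsaldoartikel < 1 THEN '9' ELSE '8' END
    WHERE ArtNr IN (SELECT art FROM tmp_9999)
""")
```

In the original T-SQL, the WHERE ... IN subquery would stay as the FROM tmp_9999 join; only the CASE expressions in the SET list matter here.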

#1062 - Duplicate entry '1' for key 1 | Wordpress

Hey guys, I'm hoping that somebody here can give me a helping hand.
I'm trying to import a DB for my WordPress site, but I'm getting errors.
Here is the error I'm getting:
-- -- Dumping data for table wp_comments -- INSERT INTO wp_comments (comment_ID, comment_post_ID, comment_author, comment_author_email, comment_author_url, comment_author_IP, comment_date, comment_date_gmt, comment_content, comment_karma, comment_approved, comment_agent, comment_type, comment_parent, user_id) VALUES (1, 1, 'Mr WordPress', '', 'http://wordpress.org/', '', '2013-02-19 11:30:17', '2013-02-19 11:30:17', 'Hi, this is a comment.\nTo delete a comment, just log in and view the post's comments. There you will have the option to edit or delete them.', 0, '1', '', '', 0, 0), (2, 81, 'admin', 'support#hpb.com', '', '223.255.245.41', '2013-02-28 13:34:06', '2013-02-28 13:34:06', 'This is test comment...', 0, '1', 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0', '', 0, 1), (3, 75, 'admin', 'support#hpb.com', '', '223.255.245.41', '2013-02-28 13:34:21', '2013-02-28 13:34:21', 'This is test comment...', 0, '[...]
It looks like the database you're importing into already has a comment with an ID of 1, and that table has a unique-key restriction on the ID. You can either go through and change all of the IDs of the entries in the dump file, or you can truncate your current comments table and see if it will let you import then.
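MySQL's error #1062 is a unique-key collision: the dump's INSERT re-uses comment_ID 1, which the target table already contains (WordPress creates a default comment on install). A minimal re-creation of the failure and the truncate fix, sketched with sqlite3 and a simplified table shape so it runs standalone; in MySQL the cleanup would be TRUNCATE TABLE wp_comments:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute(
    "CREATE TABLE wp_comments (comment_ID INTEGER PRIMARY KEY, comment_content TEXT)"
)
# The target database already holds WordPress's default comment with ID 1.
conn.execute("INSERT INTO wp_comments VALUES (1, 'existing default comment')")

try:
    # The dump's INSERT re-uses comment_ID 1 and collides with that row.
    conn.execute("INSERT INTO wp_comments VALUES (1, 'Hi, this is a comment.')")
except sqlite3.IntegrityError:
    # Clearing the table first (TRUNCATE in MySQL, DELETE here) lets the
    # dump import cleanly.
    conn.execute("DELETE FROM wp_comments")
    conn.execute("INSERT INTO wp_comments VALUES (1, 'Hi, this is a comment.')")
```

Only truncate if you don't need the comments already in the target database; otherwise, renumber the IDs in the dump file instead.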
