What SQL do I use to convert (big)datetime to show microseconds? - sybase

The Sybase docs show how to convert to several date/time formats, but none of them show the microseconds that bigdatetime supports. What CONVERT style can I use?
For instance:
SELECT CURRENT_BIGDATETIME()
-------------------
May 8 2019 4:30PM
^^^ I want microseconds here

I found several styles for which I could not find documentation, but they work for me. For instance:
SELECT CONVERT(CHAR(50),CURRENT_BIGDATETIME(),138)
May 8 2019 4:47:55.706489PM
Here are the styles I found and what their output looks like:
Style Output
36 4:34:28.070375PM
37 16:34:28.073989
38 May 8 19 4:34:28.077720PM
39 May 8 19 16:34:28.081418
40 19-05-08 16:35:41.892544
136 4:37:10.291454PM
137 16:37:10.295289
138 May 8 2019 4:37:10.299460PM
139 May 8 2019 16:37:10.304023
140 2019-05-08 16:37:10.308430
141 May 8 2019 4:37:10.312575PM
(this is on Adaptive Server Enterprise/15.7...)
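For example, style 140 from the list above should give an ISO-like timestamp with microseconds:
SELECT CONVERT(CHAR(50),CURRENT_BIGDATETIME(),140)
2019-05-08 16:37:10.308430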

Related

SQL Query for Pagination

I have students' data for the fees paid for each program, and now I want to show the outstanding fees. Since a student could have outstanding fees pending for 2018, 2019 and 2020, that student will have 3 rows (the months are in columns). Because it is the same student, I will be clubbing those records together in the front end. Now if I use pagination with a limit of 10 per page, and 3 of those 10 records belong to the same student (because the years differ), I will end up having just 7 records on the given page.
Here's the sample data.
Studentname RollNo Year Program Jan Feb Mar Apr May Jun ...
abc 1 2018 p1 200 50 10 30 88 29
abc 1 2019 p1 100 10 20 50 12 22
abc 1 2020 p1 30 77 33 27 99 100
xyz 2 2020 p2 88 29 32 99 199 200
How could I manage pagination for the above case?
Assuming your front end is HTML/CSS/JavaScript:
You don't need to handle pagination in your query - or even your backend - at all. Everything can and should be done on your frontend. I would suggest using jQuery and Bootstrap to create a paginated table to display your data, using Material Design for Bootstrap.
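If you do end up needing to page inside the query instead, one common approach is to page over distinct students rather than over rows, so a student's year rows never get split across pages. A rough sketch, assuming a dialect with window functions; the table name StudentFees is a placeholder:
SELECT *
FROM (
    SELECT f.*,
           DENSE_RANK() OVER (ORDER BY RollNo) AS student_rank   -- one rank per student
    FROM StudentFees f
) ranked
WHERE student_rank BETWEEN 1 AND 10        -- page 1; page n is (n-1)*10+1 .. n*10
ORDER BY student_rank, Year;
This way the 10-per-page limit counts students, and all of a student's years come back together.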

COLUMNS_UPDATED() skips a bit starting with columns in the middle of the table

I'm using COLUMNS_UPDATED() in a trigger to identify those columns whose values should be written to an audit table. The trigger / auditing had been working fine for multiple years. I noticed yesterday that the auditing is no longer working consistently.
I've listed the first forty columns of the table in question at the bottom for reference, along with the ORDINAL_POSITION from INFORMATION_SCHEMA.COLUMNS. The table has a total of 109 columns.
I added print COLUMNS_UPDATED() to my trigger to get some debug info.
When I update CurrentOnFleaTick, the 9th column, I see this printed:
0x0001000000000000000000000000
This is expected - the 9th column should be represented as the least significant bit of the second byte. Similarly, if I update HasAttackedAnotherAnimalExplanation I see this:
0x0000010000000000000000000000
Again, expected - the 17th column should be represented as the least significant bit of the third byte.
But... when I update HouseholdIncludesCats, I see this:
0x0000000200000000000000000000
Not expected! Where you see the 2 there should be a 1, as HouseholdIncludesCats ordinal position is 25, making it the first column represented in the fourth byte, which should be represented in the least significant bit of that byte.
I narrowed things down by updating every column between HasAttackedAnotherAnimalExplanation and HouseholdIncludesCats and found that the 'off by one' problem I'm having starts with HouseTrainedId, ordinal position 24. When updating HouseTrainedId I'm expecting
0x0000800000000000000000000000
but instead I get
0x0000000100000000000000000000
which I believe is wrong, and it is what I expect to be getting for updates to the HouseholdIncludesCats column.
I do not believe the mask should skip ahead. The mask is currently not using the most significant bit of the 3rd byte.
I did recently drop a column, but I don't have a record of its ordinal position. Based on the original code that would have created the table, I believe the ordinal position of the column that was dropped was NOT 24. (I think it was 7... It had been defined after the BreedIds.)
I'm not necessarily looking for a deep root cause determination. If there was something I could do to reset whatever internal data SQL Server uses that'd be fine. Sort of like a rebuild index idea for table metadata? Is there something like that that might fix this?
Thanks in advance for helpful answers! :)
COLUMN_NAME ORDINAL_POSITION
PetId 1
AdopterUserId 2
AdoptionDeadline 3
AgeMonths 4
AgeYears 5
BreedIds 6
Color 7
CreatedOn 8
CurrentOnFleaTick 9
CurrentOnHeartworm 10
CurrentOnVaccinations 11
FoodTypeId 12
GenderId 13
GuardianForMonths 14
GuardianForYears 15
HairCoatLength 16
HasAttackedAnotherAnimalExplanation 17
HasAttackedAnotherAnimalId 18
HasBeenReferredByShelter 19
HasHadTraining 20
HasMedicalConditions 21
HasRecentlyBittenExplanation 22
HasRecentlyBittenId 23
HouseTrainedId 24
HouseholdIncludesCats 25
HouseholdIncludesChildren5to10 26
HouseholdIncludesChildrenUnder5 27
HouseholdIncludesDogs 28
HouseholdIncludesOlderChildren 29
HouseholdIncludesOtherPets 30
HouseholdOtherPets 31
KnowsCommandDown 32
KnowsCommandPaw 33
KnowsCommandSit 34
KnowsCommandStay 35
KnowsOtherCommands 36
LastUpdatedOn 37
LastVisitedVetOn 38
ListingCodeId 39
LitterTypeClumping 40
So... I thought I had googled enough before posting this, but I guess I hadn't. I found this:
https://www.sqlservercentral.com/forums/topic/columns_updated-and-phantom-fields
Using COLUMNPROPERTY() to get the ColumnID is definitely the way to go.
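In other words, inside the trigger, map each column name to its real column ID and test that bit, instead of relying on the INFORMATION_SCHEMA ordinal position. A minimal sketch; dbo.Pet stands in for the audited table's name:
DECLARE @colId int = COLUMNPROPERTY(OBJECT_ID('dbo.Pet'), 'HouseholdIncludesCats', 'ColumnId');
IF (SUBSTRING(COLUMNS_UPDATED(), ((@colId - 1) / 8) + 1, 1) & POWER(2, (@colId - 1) % 8)) <> 0
    PRINT 'HouseholdIncludesCats was updated';
Dropped columns leave gaps in the column IDs, which is why they can drift away from the ordinal positions reported by INFORMATION_SCHEMA.COLUMNS.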

SQL Cube-ISO 8601 Calendar insert 445 months

I'm quite new to SSAS and currently building a cube. Everything is OK so far, except that I need to use the ISO 8601 calendar and unfortunately the built-in one doesn't contain months, only year, week and day.
What I want to achieve is to add months following a 4-4-5 pattern, like reporting months, but using the weeks, days and start/end of year of the ISO 8601 calendar.
Is it possible to achieve this without editing manually the calendar table?
Thanks
John
Exactly! I've finally done something using this kind of logic:
CASE
WHEN [ISO_8601_Week_Of_Year]>=1 AND [ISO_8601_Week_Of_Year]< 5 THEN 'January,'+REPLACE(ISO_8601_Year_Name,'ISO8601 Calendar','')
WHEN [ISO_8601_Week_Of_Year]>4 AND [ISO_8601_Week_Of_Year]< 9 THEN 'February,'+REPLACE(ISO_8601_Year_Name,'ISO8601 Calendar','')
WHEN [ISO_8601_Week_Of_Year]>8 AND [ISO_8601_Week_Of_Year]< 14 THEN 'March,'+REPLACE(ISO_8601_Year_Name,'ISO8601 Calendar','')
WHEN [ISO_8601_Week_Of_Year]>13 AND [ISO_8601_Week_Of_Year]< 18 THEN 'April,'+REPLACE(ISO_8601_Year_Name,'ISO8601 Calendar','')
WHEN [ISO_8601_Week_Of_Year]>17 AND [ISO_8601_Week_Of_Year]< 22 THEN 'May,'+REPLACE(ISO_8601_Year_Name,'ISO8601 Calendar','')
WHEN [ISO_8601_Week_Of_Year]>21 AND [ISO_8601_Week_Of_Year]< 27 THEN 'June,'+REPLACE(ISO_8601_Year_Name,'ISO8601 Calendar','')
WHEN [ISO_8601_Week_Of_Year]>26 AND [ISO_8601_Week_Of_Year]< 31 THEN 'July,'+REPLACE(ISO_8601_Year_Name,'ISO8601 Calendar','')
WHEN [ISO_8601_Week_Of_Year]>30 AND [ISO_8601_Week_Of_Year]< 35 THEN 'August,'+REPLACE(ISO_8601_Year_Name,'ISO8601 Calendar','')
WHEN [ISO_8601_Week_Of_Year]>34 AND [ISO_8601_Week_Of_Year]< 40 THEN 'September,'+REPLACE(ISO_8601_Year_Name,'ISO8601 Calendar','')
WHEN [ISO_8601_Week_Of_Year]>39 AND [ISO_8601_Week_Of_Year]< 44 THEN 'October,'+REPLACE(ISO_8601_Year_Name,'ISO8601 Calendar','')
WHEN [ISO_8601_Week_Of_Year]>43 AND [ISO_8601_Week_Of_Year]< 48 THEN 'November,'+REPLACE(ISO_8601_Year_Name,'ISO8601 Calendar','')
WHEN [ISO_8601_Week_Of_Year]>47 AND [ISO_8601_Week_Of_Year]<= 52 THEN 'December,'+REPLACE(ISO_8601_Year_Name,'ISO8601 Calendar','')
END AS Month_Name
And then used another query like this:
SELECT MIN(ISO_8601_Week) AS 'Month_date',Month_Name
FROM [Reporting].[dbo].[SSAS_Calendar]
GROUP BY Month_Name
ORDER BY Month_date
Month_Name is the column created above, and Month_date returns the beginning (first ISO week) of each month.
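For what it's worth, the same 4-4-5 mapping can also be written arithmetically. This is only a sketch of the month number; it assumes the week column is an integer in 1-52 and folds a possible week 53 (which the CASE above doesn't cover) into December:
CASE
    WHEN [ISO_8601_Week_Of_Year] >= 53 THEN 12
    ELSE (([ISO_8601_Week_Of_Year] - 1) / 13) * 3
         + CASE
               WHEN ([ISO_8601_Week_Of_Year] - 1) % 13 < 4 THEN 1   -- first 4 weeks of the quarter
               WHEN ([ISO_8601_Week_Of_Year] - 1) % 13 < 8 THEN 2   -- next 4 weeks
               ELSE 3                                               -- last 5 weeks
           END
END AS Month_Number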
You can add "Reporting Calendar" in Visual Studio. It can have 445 pattern. But for me it gives wrong numbers to weeks i.e. week gets the year of its Monday not its Thursday (as it should be according to ISO).
Choose "New dimension..." and then one of time dimension options.

SQL Server 2008 Varbinary(Max) column - 28Mb of images creating 3.2Gb database

This is the first time I've tried to store images in my DB instead of on the file server, and I'm regretting it so far. I can't use FILESTREAM because my host doesn't support it, so I'm using a varbinary(max) column. I'm keeping track of the image sizes I insert and there are about 28 MB so far, but the database is at 3.2 GB, which is just crazy. Am I better off using varbinary(XXXX) to reduce this? Is SQL Server reserving space for the MAX?
Using MS SQL Server 2008 btw
Here are the top table sizes:
TableName RowCounts TotalSpaceKB UsedSpaceKB UnusedSpaceKB
Municipality 1028316 64264 64232 32
Image 665 33616 33408 208
User 320 248 224 24
SettingUser 5910 264 160 104
Region 1418 136 136 0
ImageUser 665 56 56 0
ConversationItem 164 56 56 0
Setting 316 48 48 0
Culture 378 40 40 0
UserTrack 442 40 40 0
Numbers 1000 32 32 0
Country 240 32 32 0
Conversation 52 32 32 0
CountryIp 0 88 32 56
ReportUser 0 16 16 0
ConversationItemImage 0 16 16 0
Here's the result for exec sp_spaceused:
database_size unallocated space
3268.88 MB 0.84 MB
reserved data index_size unused
359592 KB 291744 KB 66600 KB 1248 KB
I should probably also mention that there is a geography column on the Municipality table too, in case this has any impact due to spatial indexes... I've used this plenty of times in the past and had no issues, but I've never had 1M+ records either, usually fewer than 20k.
Make sure that all that space is being used by the actual data, and not the log file.
Shrinking the log file will only remove unused space. In order to clear entries before shrinking it, you would need to back up or truncate the log beforehand (warning: if you care at all about your log chain, this could break it).
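A quick way to check is to compare the data and log file sizes, and only then shrink. A sketch; the database name, backup path and logical log file name below are placeholders:
-- size is reported in 8 KB pages
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- If the log is what ballooned, back it up (or switch to SIMPLE recovery) and shrink it.
-- Warning: this is where the log backup chain can get broken.
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn';
DBCC SHRINKFILE (MyDatabase_log, 100);   -- target size in MB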

Hive Query - Pivot Table by First and Last Entry of Date

To start here is some sample data
Sample Input
ID Date Value
10 2012-06-01 00:01:45 20
10 2012-06-01 00:01:51 12
10 2012-06-01 00:01:56 21
10 2012-06-01 00:02:01 43
10 2012-06-01 00:02:06 12
17 2012-06-01 00:02:43 64
17 2012-06-01 00:02:47 53
17 2012-06-01 00:02:52 23
17 2012-06-01 00:02:58 45
17 2012-06-01 00:03:03 34
Desired Output
ID FirstDate LastDate FirstValue LastValue
10 2012-06-01 00:01:45 2012-06-01 00:02:06 20 12
17 2012-06-01 00:02:43 2012-06-01 00:03:03 64 34
So I am looking to get the first and last date, and the values for both, onto a single line. The ID in my table will also have other entries at later dates, so I only want the first and last of a chain of entries. Entries in a chain are 5 seconds apart; if the gap is greater than that, it is a new chain.
Any suggestions?
Thanks
I'm just beginning the search process on this, but it looks like LATERAL VIEW and EXPLODE, coupled with maybe a user-defined function or two, are your friends.
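If your Hive version has window functions (0.11+), they may be enough on their own. A rough sketch, assuming a table readings with columns id, dt and value, that starts a new chain whenever the gap to the previous row is more than 5 seconds and then keeps each chain's first and last row:
SELECT id,
       MIN(dt) AS first_date,
       MAX(dt) AS last_date,
       MIN(CASE WHEN rn_asc  = 1 THEN value END) AS first_value,
       MIN(CASE WHEN rn_desc = 1 THEN value END) AS last_value
FROM (
    SELECT id, dt, value, chain,
           ROW_NUMBER() OVER (PARTITION BY id, chain ORDER BY dt)      AS rn_asc,
           ROW_NUMBER() OVER (PARTITION BY id, chain ORDER BY dt DESC) AS rn_desc
    FROM (
        SELECT id, dt, value,
               SUM(CASE WHEN unix_timestamp(dt) - unix_timestamp(prev_dt) > 5
                        THEN 1 ELSE 0 END)
                   OVER (PARTITION BY id ORDER BY dt) AS chain
        FROM (
            SELECT id, dt, value,
                   LAG(dt) OVER (PARTITION BY id ORDER BY dt) AS prev_dt
            FROM readings
        ) t1
    ) t2
) t3
GROUP BY id, chain;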
I ended up creating a MapReduce job to work on the CSV files of my data instead of using Hive.
I "mapped" based on ID, then set a parameter so that if entries were more than 2 hours apart I separated them.
In the end it was easier to hack the MapReduce code than to ponder Hive queries.
