Fitbit Data Export - Creating a data warehouse - sql-server
I plan to create a Fitbit data warehouse for educational purposes, and there doesn't seem to be any material online for Fitbit data specifically.
A few issues faced:
You can only export one month of data (max) at a time from the Fitbit website. My plan would be to drop a month's worth of data at a time into a folder, and have these files read separately.
You can export the data as either CSV or .XLS. The issue with XLS is that each day in the month creates a separate sheet for food logs, which then need to be merged in a staging table. The issue with CSV is that all of the data lands in a single sheet per file, one section after another (see the CSV layout below).
I would then use SSIS to load the data into a SQL Server database for reporting purposes.
Which approach would be better suited: exporting the data as .XLS or as CSV?
Edit: How would it be possible to load a CSV file with such a format through SSIS?
The CSV layout would be as such:
Body,,,,,,,,,
Date,Weight,BMI,Fat,,,,,,
01/06/2018,71.5,23.29,15,,,,,,
02/06/2018,71.5,23.29,15,,,,,,
03/06/2018,71.5,23.29,15,,,,,,
04/06/2018,71.5,23.29,15,,,,,,
05/06/2018,71.5,23.29,15,,,,,,
06/06/2018,71.5,23.29,15,,,,,,
07/06/2018,71.5,23.29,15,,,,,,
08/06/2018,71.5,23.29,15,,,,,,
09/06/2018,71.5,23.29,15,,,,,,
10/06/2018,71.5,23.29,15,,,,,,
11/06/2018,71.5,23.29,15,,,,,,
12/06/2018,71.5,23.29,15,,,,,,
13/06/2018,71.5,23.29,15,,,,,,
14/06/2018,71.5,23.29,15,,,,,,
15/06/2018,71.5,23.29,15,,,,,,
16/06/2018,71.5,23.29,15,,,,,,
17/06/2018,71.5,23.29,15,,,,,,
18/06/2018,71.5,23.29,15,,,,,,
19/06/2018,71.5,23.29,15,,,,,,
20/06/2018,71.5,23.29,15,,,,,,
21/06/2018,71.5,23.29,15,,,,,,
22/06/2018,71.5,23.29,15,,,,,,
23/06/2018,71.5,23.29,15,,,,,,
24/06/2018,71.5,23.29,15,,,,,,
25/06/2018,71.5,23.29,15,,,,,,
26/06/2018,71.5,23.29,15,,,,,,
27/06/2018,71.5,23.29,15,,,,,,
28/06/2018,71.5,23.29,15,,,,,,
29/06/2018,72.8,23.72,15,,,,,,
30/06/2018,72.95,23.77,15,,,,,,
,,,,,,,,,
Foods,,,,,,,,,
Date,Calories In,,,,,,,,
01/06/2018,0,,,,,,,,
02/06/2018,0,,,,,,,,
03/06/2018,0,,,,,,,,
04/06/2018,0,,,,,,,,
05/06/2018,0,,,,,,,,
06/06/2018,0,,,,,,,,
07/06/2018,0,,,,,,,,
08/06/2018,0,,,,,,,,
09/06/2018,0,,,,,,,,
10/06/2018,0,,,,,,,,
11/06/2018,0,,,,,,,,
12/06/2018,0,,,,,,,,
13/06/2018,100,,,,,,,,
14/06/2018,0,,,,,,,,
15/06/2018,0,,,,,,,,
16/06/2018,0,,,,,,,,
17/06/2018,0,,,,,,,,
18/06/2018,0,,,,,,,,
19/06/2018,0,,,,,,,,
20/06/2018,0,,,,,,,,
21/06/2018,0,,,,,,,,
22/06/2018,0,,,,,,,,
23/06/2018,0,,,,,,,,
24/06/2018,0,,,,,,,,
25/06/2018,0,,,,,,,,
26/06/2018,0,,,,,,,,
27/06/2018,"1,644",,,,,,,,
28/06/2018,"2,390",,,,,,,,
29/06/2018,981,,,,,,,,
30/06/2018,0,,,,,,,,
For example, "Foods" would be the table name, "Date" and "Calories In" would be column names. "01/06/2018" is the Date, "0" is the "Calories in" and so on.
Tricky, I just pulled my Fitbit data as this piqued my curiosity. That CSV is messy: you basically have mixed file formats in one file, which won't be straightforward in SSIS. As for the XLS format, with the food logs tagging each day onto its own worksheet as you mentioned, SSIS won't like the sheet names changing.
CSV: (screenshot omitted)
XLS: (screenshot omitted)
Here are a couple of options off the top of my head for the CSV.
Individual exports from Fitbit
I see you can pick which data you want to include in your export: Body, Foods, Activities, Sleep.
Do each export individually, saving each file with a prefix of what type of data it is.
Then build an SSIS package with multiple foreach loops and a data flow task for each individual file format.
That would do it, but exporting the data from Fitbit would become a tedious manual effort.
Handle the one file with all the data
For this option you would have to get creative, since the formats are mixed and the sections have different column definitions, etc.
One option would be to create a staging table with as many columns as whichever section has the most, which looks to be "Activities". Give each column a generic name such as Column1, Column2, and make them all VARCHAR.
Since we have mixed "formats" and not all data types would line up, we just need to get all the data out first and sort out conversions later.
From there you can build one data flow with a flat file source, and also add a line number column, since we will need it later to sort out where each section of data starts and ends.
When building out the file connection for your source, you will have to add all the columns manually: since the first row of data in your file doesn't include a comma for every field, SSIS won't be able to detect all the columns. Manually add the number of columns needed, and also make sure:
Text Qualifier = "
Header row Delimiter = {LF}
Row Delimiter = {LF}
Column Delimiter = ,
That should get your data loaded into the database, at least into a stage table. From there you would need a bunch of T-SQL to zero in on each "section" of data, then parse, convert, and load from there.
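As an alternative to untangling the sections in T-SQL after the load, the raw file could also be pre-split outside of SSIS. This is only a minimal sketch (not the answer's approach), in Python, assuming the layout shown above: a one-cell heading row, a header row, then data rows, with a blank row between sections:

```python
import csv
import io

def split_sections(text):
    """Split Fitbit's mixed-format export into {section_name: rows}.

    Assumes each section is a heading row ('Body', 'Foods', ...),
    then a header row, then data rows, with a blank row between
    sections. Trailing padding commas are stripped from each row.
    """
    sections, current = {}, None
    for row in csv.reader(io.StringIO(text)):
        while row and row[-1] == "":
            row.pop()                      # drop the padding commas
        if not row:
            current = None                 # a blank row ends a section
            continue
        if current is None:
            current = row[0]               # heading row, e.g. 'Foods'
            sections[current] = []
        else:
            sections[current].append(row)
    return sections
```

Each value list in the result could then be written out as its own clean CSV per section, which SSIS handles without any of the tricks above. Note that csv.reader also keeps quoted values such as "1,644" intact.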
For a small test, I just had a table called TestTable:
CREATE TABLE [dbo].[TestTable](
[LineNumber] [INT] NULL,
[Column1] [VARCHAR](MAX) NULL,
[Column2] [VARCHAR](MAX) NULL,
[Column3] [VARCHAR](MAX) NULL,
[Column4] [VARCHAR](MAX) NULL,
[Column5] [VARCHAR](MAX) NULL,
[Column6] [VARCHAR](MAX) NULL,
[Column7] [VARCHAR](MAX) NULL,
[Column8] [VARCHAR](MAX) NULL,
[Column9] [VARCHAR](MAX) NULL
)
I created a data flow and hooked up the flat file source (screenshot omitted).
After executing the data flow, the data was loaded into the stage table (screenshot omitted).
From there I worked out some T-SQL to get to each "Section" of data. Here's an example that shows how you could filter to the "Foods" section:
DECLARE @MaxLine INT = (
SELECT MAX([LineNumber])
FROM [TestTable]
);
--Something like this, using a subquery that gets you the starting and ending line numbers for each section,
--and converting whichever columns that section's data ended up in.
SELECT CONVERT(DATE, [a].[Column1]) AS [Date]
, CONVERT(BIGINT, [a].[Column2]) AS [CaloriesIn]
FROM [TestTable] [a]
INNER JOIN (
--Something like this to build out starting and ending line number for each section
SELECT [Column1]
, [LineNumber] + 2 AS [StartLineNumber] --Add 2 here because the data in a section starts 2 lines after its "heading"
, LEAD([LineNumber], 1, @MaxLine) OVER ( ORDER BY [LineNumber] )
- 1 AS [EndLineNumber]
FROM [TestTable]
WHERE [Column1] IN ( 'Body', 'Foods', 'Activities' ) --Each of the sections of data
) AS [Section]
ON [a].[LineNumber]
BETWEEN [Section].[StartLineNumber] AND [Section].[EndLineNumber]
WHERE [Section].[Column1] = 'Foods'; --Then just filter on which section you want.
Which in turn gave me the following (screenshot omitted):
There could be other options for parsing that data, but this should give you a good starting point and an idea of how tricky this particular CSV file is.
As for the XLS option, that would be straightforward for all sections except the food logs. You would basically set up an Excel file connection, where each worksheet becomes a "table" in the source, and have an individual data flow for each worksheet.
But then what about the food logs? Once those sheet names changed as you rolled into the next month, SSIS would error out, probably complaining about metadata.
One obvious workaround would be to manually manipulate the Excel file and merge all of those sheets into one "Food Log" sheet prior to running it through SSIS. Not ideal, because you'd probably want something completely automated.
I'd have to tinker around with that. Maybe a script task and some C# code could combine all those sheets into one, parsing the date out of each sheet name and appending it to the data prior to a data flow loading it. Maybe possible.
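The merging logic such a script task would implement can be sketched in a few lines. This sketch is in Python rather than C#, and the sheet-name format ("Food Log 20180627") and row layout are assumptions here; reading real .xls sheets would additionally need a library such as xlrd or openpyxl:

```python
from datetime import datetime

def merge_food_logs(sheets):
    """Merge per-day food log sheets into one table.

    sheets: iterable of (sheet_name, rows), where sheet_name is
    assumed to end in a YYYYMMDD date (e.g. 'Food Log 20180627')
    and rows are dicts of column name -> value.
    """
    merged = []
    for name, rows in sheets:
        # Parse the date out of the sheet name and prepend it to each row
        date = datetime.strptime(name.split()[-1], "%Y%m%d").date()
        for row in rows:
            merged.append({"Date": date.isoformat(), **row})
    return merged
```

The combined list could then be written to a single "Food Log" sheet (or straight to a CSV), giving the data flow one stable source with a Date column instead of a moving set of sheet names.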
It looks like there are challenges with the files Fitbit exports no matter which format you pick.
Related
Query Snowflake Named Internal Stage by Column NAME and not POSITION
My company is attempting to use Snowflake named internal stages as a data lake to store vendor extracts. One vendor provides an extract that is 1000+ columns in a pipe-delimited .dat file. This is a canned report that they extract. The column names WILL always remain the same; however, the column locations can change over time without warning. Based on my research, a user can only query a file in a named internal stage using positional syntax:
--problematic because the order of the columns can change
select t.$1, t.$2
from @mystage1 (file_format => 'myformat', pattern=>'.data.[.]dat.gz') t;
Is there any way to use the column names instead? E.g.:
select t.first_name
from @mystage1 (file_format => 'myformat', pattern=>'.data.[.]csv.gz') t;
I appreciate everyone's help, and I do realize that this is an unusual requirement.
You could read these files with a UDF: parse the CSV inside the UDF with code that is aware of the headers, then output either multiple columns or one variant. For example, let's create a .CSV inside Snowflake we can play with later:
create or replace temporary stage my_int_stage
file_format = (type=csv compression=none);

copy into '@my_int_stage/fx3.csv'
from (
    select *
    from snowflake_sample_data.tpcds_sf100tcl.catalog_returns
    limit 200000
)
header=true single=true overwrite=true max_file_size=40772160;

list @my_int_stage; -- 34MB uncompressed CSV, because why not
Then this is a Python UDF that can read that CSV and parse it into an object, while being aware of the headers:
create or replace function uncsv_py()
returns table(x variant)
language python
imports=('@my_int_stage/fx3.csv')
handler = 'X'
runtime_version = 3.8
as $$
import csv
import sys

IMPORT_DIRECTORY_NAME = "snowflake_import_directory"
import_dir = sys._xoptions[IMPORT_DIRECTORY_NAME]

class X:
    def process(self):
        with open(import_dir + 'fx3.csv', newline='') as csvfile:
            reader = csv.DictReader(csvfile)
            for row in reader:
                yield(row, )
$$;
And then you can read from this UDF that outputs a table:
select * from table(uncsv_py()) limit 10;
A limitation of what I showed here is that the Python UDF needs an explicit file name (for now), as it doesn't take a whole folder. Java UDFs do - it will just take longer to write an equivalent UDF.
https://docs.snowflake.com/en/developer-guide/udf/python/udf-python-tabular-functions.html
https://docs.snowflake.com/en/user-guide/unstructured-data-java.html
How to view timestamp when pipe finished copying data from stage to table?
I've created a pipe from an S3 stage, and with a Python script I'm generating the timestamps of when I generate the data from a streaming service into file batches. I would also like to be able to add the timestamp of when the files were actually copied into the table from the S3 stage. I've found some documentation regarding the PIPE_USAGE_HISTORY function, but although I've run quite a few tests over the past days, the query below returns an empty table. What am I doing wrong?
select *
from table(information_schema.pipe_usage_history(
    date_range_start=>dateadd('day',-14,current_date()),
    date_range_end=>current_date()));
I found the answer. There is another function I should be using: copy_history. The above query would be rewritten as follows:
select *
from table(information_schema.copy_history(
    table_name => '{replace with your schema.table}',
    start_time => dateadd(days, -14, current_timestamp()),
    end_time => current_timestamp()));
Presto: How to read from s3 an entire bucket that is partitioned in sub-folders?
I need to read, using Presto, an entire dataset from S3 that sits in "bucket-a". But inside the bucket, the data was saved in sub-folders by year. So I have a bucket that looks like this:
Bucket-a>2017>data
Bucket-a>2018>more data
Bucket-a>2019>more data
All of the above data belongs to the same table, but is saved this way in S3. Notice that in bucket-a itself there is no data, just inside each folder. What I have to do is read all the data from the bucket as a single table, adding the year as a column or partition. I tried this way, but it didn't work:
CREATE TABLE hive.default.mytable (
    col1 int,
    col2 varchar,
    year int
)
WITH (
    format = 'json',
    partitioned_by = ARRAY['year'],
    external_location = 's3://bucket-a/' --also tried 's3://bucket-a/year/'
)
and also:
CREATE TABLE hive.default.mytable (
    col1 int,
    col2 varchar,
    year int
)
WITH (
    format = 'json',
    bucketed_by = ARRAY['year'],
    bucket_count = 3,
    external_location = 's3://bucket-a/' --also tried 's3://bucket-a/year/'
)
Neither of the above worked. I have seen people writing to S3 with partitions using Presto, but what I'm trying to do is the opposite: read data from S3 that is already split into folders as a single table. Thanks.
If your folders followed the Hive partition folder naming convention (year=2019/), you could declare the table as partitioned and just use the system.sync_partition_metadata procedure in Presto. Since your folders do not follow the convention, you need to register each one individually as a partition using the system.register_partition procedure (will be available in Presto 330, about to be released). (The alternative to register_partition is running an appropriate ADD PARTITION in the Hive CLI.)
how to load multiple files into multiple destination table in ssis
Hi, I have a question about SSIS. A source location has different files, and each file name contains a location name. We want to load each file into its corresponding table using an SSIS package.
Files location: c:\Sourcefile\
File names come per location, e.g. hyd files and bang files.
Hyd files come as hyd.txt, hyd1.txt, hyd2.txt, all with the same structure. All hyd-related files should load into the hyd table only.
Bang files come as bang.txt, bang1.txt, bang2.txt, all with the same structure. All bang-related files should load into the bang table only.
All source files and target tables have the same structure.
Source file structure, for hyd.txt:
Id,name,loc
1,abc,hyd
2,hari,hyd
For hyd1.txt:
id,name,loc
4,banu,hyd
5,ran,hyd
Similarly for bang.txt:
id,name,loc
10,gop,bang
11,union,loc
For bang1.txt:
id,name,loc
14,ja,bang
All hyd-related text files load into the hyd table in the SQL Server database; similarly, the bang files load into the bang table.
hyd table structure:
CREATE TABLE [dbo].[hyd](
[id] [int] NULL,
[name] [varchar](50) NULL,
[loc] [varchar](50) NULL
)
Similarly for bang:
CREATE TABLE [dbo].[bang](
[id] [int] NULL,
[name] [varchar](50) NULL,
[loc] [varchar](50) NULL
)
I tried as in the screenshots above, but the table names are not resolved dynamically; I kept static values in the table variable, and then all location-related records were loaded into one table. How can I load multiple files into multiple destination tables in SSIS? Please tell me how to achieve this task in SSIS.
From the screenshots I have 3 suggestions:
You have to set the Data Flow Task's Delay Validation property to True.
You have to change the User::location variable value outside the data flow task. You can add an expression task before the data flow task with the following expression:
@[User::location] = SUBSTRING(@[User::FileName],1,FINDSTRING(@[User::FileName],".",1) - 1)
or use a script component to achieve this.
Or you can add a script task followed by 2 data flow tasks inside the foreach loop; the script task checks the file name: if it is hyd it executes the first DFT, if it is bang it executes the second. (Check this link: Working with Precedence Constraints in SQL Server Integration Services)
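The intent of that expression, deriving the destination table from the file name, can be illustrated with a short sketch outside of SSIS. The trailing-digit stripping is an addition here (the SUBSTRING/FINDSTRING expression alone would yield "hyd1" for hyd1.txt, not "hyd"):

```python
import re

def table_for_file(filename):
    """Map a source file name to its destination table name,
    e.g. 'hyd1.txt' -> 'hyd', 'bang.txt' -> 'bang'."""
    stem = filename.rsplit(".", 1)[0]   # drop the extension
    return re.sub(r"\d+$", "", stem)    # drop any trailing digits
```

The same two steps could live in the SSIS script task, choosing which data flow to run for each file.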
SQL Server FOR XML PATH carriage return after each root node
I am using FOR XML PATH in SQL Server 2014 to generate an XML file to send to one of our vendors. Their system requires that each root node be separated by a carriage return / line break. Here is the T-SQL code I'm using to generate it:
DECLARE @xmldata XML;
SET @xmldata = (
SELECT a.StatementDate AS [stmt_date]
, a.CustomerID AS [student_id]
, 'Upon Receipt' AS [due_date]
, a.TotalDue AS [curr_bal]
, a.TotalDue AS [total_due]
, a.AlternateID AS [alternate_id]
, a.FullName AS [student_name]
, a.Email AS [student_email]
, a.Addr1
, a.Addr2
, a.Msg AS [message]
, (
SELECT b.StatementDate AS [activity_date]
, b.ActivityDesc AS [activity_desc]
, b.TermBalance AS [charge]
FROM #ActivityXML AS b
WHERE a.CustomerID = b.CustomerID
ORDER BY a.StatementDate
FOR XML PATH('activity'), TYPE
)
FROM #BillingStatement AS a
FOR XML PATH('Billing')
);
SELECT @xmldata AS returnXml;
This works great, but returns one long string with no separation between nodes at all. (I would post an example but it would just look like a jumbled-up mess in here.) Anyhow, what we need is to generate a file where each <Billing> tag and the contents within are placed on a new line after a closing </Billing> tag. I would guess there's a simple solution, such as inserting char(13)+char(10) somewhere in the code, but I've been unable to get that working. Is it possible, or will I need to do it in another system?
Based on responses here and research elsewhere, this is not possible using just T-SQL. We would need to either copy / paste the output, or use another program to take the data and insert line breaks. From @Shnugo: "The pretty print of XML is not supported natively within T-SQL. You might use a CLR method, a service, or any kind of post-processing with a physically stored file. You might open the XML from the grid results' XML viewer and copy-paste the output to a text editor. Don't forget to set the XML size for grid results to unlimited, if your XML is big."
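As a sketch of the post-processing route: once the XML string is saved to a file, a tiny script in any language can insert the char(13)+char(10) the vendor wants after each closing tag. The sample fragment below is invented for illustration, and the approach assumes the tag name is fixed:

```python
# A jumbled single-line fragment such as FOR XML PATH returns
raw = ("<Billing><stmt_date>2018-06-01</stmt_date></Billing>"
       "<Billing><stmt_date>2018-06-02</stmt_date></Billing>")

# Insert CR/LF after every closing </Billing> tag
separated = raw.replace("</Billing>", "</Billing>\r\n")
print(separated)
```

For full pretty-printing (indented child nodes, one well-formed root), the standard library's xml.dom.minidom toprettyxml would be the next step up, but the simple replace matches the stated requirement exactly.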