I'm new to file operations and I want to write the data from this table into a text file. How can I do it?
Let's say my table is as below:
Deal & Promotion | Option         | Detail            | Card Price (incl. initial stored value) | Reload                | Total Payment
My30             | -              | 30 days           | -                                       | 30                    | 30
MyCity           | 1 day          | Reload            | -                                       | 5                     | 5
MyCity           | 1 day          | First time        | 10                                      | 5                     | 15
MyCity           | 3 days         | Reload            | -                                       | 15                    | 15
MyCity           | 3 days         | First time        | 10                                      | 15                    | 25
Concession Card  | Student        | Fare discount 50% | 15                                      | (user-defined values) | Minimum reload: 15 (user-defined values)
Concession Card  | Senior Citizen | Fare discount 50% | 15                                      | (user-defined values) | Minimum reload: 15 (user-defined values)
Concession Card  | Disability     | Fare discount 50% | 15                                      | (user-defined values) | Minimum reload: 15 (user-defined values)
Let's say I want to transform all of it into a text file like this:
#include <stdio.h>

FILE *input = fopen("pricelist.txt", "w");  /* despite the name, this is the output file */
if (input != NULL) {
    fprintf(input, "My30 30Days FirstTime&Reload 30.00\nMyCity 1Day Reload 5.00\nMyCity 1Day FirstTime 15.00\nMyCity 3Days Reload 15.00\nMyCity 3Days FirstTime 25.00\nConcessionCard Student Benefit50%% >=15.00\nConcessionCard SeniorCitizen Benefit50%% >=15.00\nConcessionCard OKU Benefit50%% >=15.00\n");
    fclose(input);
}
The example text file that I have built looks like the string above. I cannot enter a space between two words of one value, so I stuck them together. Is there any other way to write the data so that it stays similar to the table above?
After transforming all the data into the text file, how can I read those values back into my program later on?
You can create a serialization and a deserialization routine in your program so that when you write or read the text file, it keeps the table format.
Serialization: you write the file in a specific order, with separators between the values.
Deserialization: you read the file, detect the separators dynamically, and assign the values, in the same order you wrote them, to a matrix.
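For instance, here is a minimal sketch of that idea in C (the file name and field layout are illustrative). Each record is written as one line with '|' between the fields, so a value such as "Concession Card" keeps its space, and reading splits each line on the same separator:

#include <stdio.h>
#include <string.h>

int main(void) {
    /* Serialization: one record per line, fields separated by '|'. */
    FILE *f = fopen("pricelist.txt", "w");
    if (f == NULL) return 1;
    fprintf(f, "MyCity|1 day|First time|15.00\n");
    fprintf(f, "Concession Card|Student|Benefit 50%%|>=15.00\n");
    fclose(f);

    /* Deserialization: read line by line and split on the separator. */
    char line[256];
    f = fopen("pricelist.txt", "r");
    if (f == NULL) return 1;
    while (fgets(line, sizeof line, f) != NULL) {
        line[strcspn(line, "\n")] = '\0';   /* strip the trailing newline */
        for (char *field = strtok(line, "|"); field != NULL; field = strtok(NULL, "|"))
            printf("[%s] ", field);         /* each field, spaces intact */
        printf("\n");
    }
    fclose(f);
    return 0;
}

Instead of printing, you would store the fields into your array (matrix) of records. Any character that can never occur inside a value works as the separator.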
Related
I am trying to calculate the average of durations from the last 40 days for different IDs.
Example: I have 40 days, and for each day IDs from 1-20, and each ID has a start date and an end date in HH:MI:SS.
My code is a cursor which fetches the last 40 days; then I made a second for loop. In this one I select all the IDs for that day. Then I go through every ID for this day and select the start and end date, calculating the duration. So far so good. But how do I calculate the average of the durations for the IDs over the last 40 days?
The idea is simple: take the durations for one ID (over the last 40 days), add them together, and divide them by 40. Then do the same for all IDs. My plan was to make a 2D array, putting all the IDs in the first dimension and accumulating each ID's durations in the second, so I would end up with the sum of all durations for one ID and could read that value from the array. But I am kind of stuck on that idea.
I also wonder if there is a better solution.
Thanks for any help!
From my point of view, you don't need loops nor PL/SQL - just calculate the average:
select id,
avg(end_date - start_date)
from your_table
where start_date >= trunc(sysdate) - 40
group by id
A drawback might be what you said - that you stored the times as hh:mi:ss. What does that mean? That you stored them as strings? If so, that is most probably a bad idea; dates (as Oracle doesn't have a separate datatype for time) should be stored in DATE datatype columns.
If you really have to work with strings, then convert them to dates:
avg(to_date(end_date, 'hh:mi:ss') - to_date(start_date, 'hh:mi:ss'))
Also, you'll then have to have another DATE datatype column which is capable of saying what "last 40 days" actually means.
Result (the average) will be number of days between these values. Then you can format it prettier, if you want.
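For example (a sketch: start_time, end_time and run_date are hypothetical column names, hh24 is used so afternoon times parse as well, and it assumes the average duration is under 24 hours):

select id,
       to_char(date '2000-01-01'
               + avg(to_date(end_time, 'hh24:mi:ss') - to_date(start_time, 'hh24:mi:ss')),
               'hh24:mi:ss') as avg_duration
  from your_table
 where run_date >= trunc(sysdate) - 40
 group by id;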
I want to build a forecast model in Excel which automatically calculates the expected monthly values of a certain variable.
Table A (Output table):
Here, I want to show the expected end-of-month value for a patient. For example, I want to forecast the total value of X for Person A for the whole month of October.
[screenshot of Table A]
Table B (Data table):
Here, I receive a daily import from an SQL database with the relevant person data for that day.
For example, on 15.10.2021 I would receive the following:
[screenshot of the data imported on 15.10.2021]
In short, I would like to do the following calculation in my output table (Table A):
Return the value of "Number of X", given that "Patient" = Person A and the MM/YYYY matches (if successful, this should show 13 for Person A/October in the output table).
Secondly, I want the above value to be divided by the day number of that particular date (so 15 days in this example).
Thirdly, I want to multiply it by the total number of days of that given month (as indicated in the Output table/Date section).
I have tried different sumifs/array formulas but I really struggle with one consolidated formula. Any help/tips much appreciated!
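One consolidated formula could look roughly like this (a sketch, not a tested solution: it assumes the data sheet holds only the most recent import, with Patient in column A, the import date in column B, and "Number of X" in column C; the month being forecast is October 2021):

=SUMIFS(C:C, A:A, "Person A", B:B, ">="&DATE(2021,10,1), B:B, "<="&EOMONTH(DATE(2021,10,1),0)) / DAY(MAX(B:B)) * DAY(EOMONTH(DATE(2021,10,1),0))

The SUMIFS picks up the month-to-date value (13), DAY(MAX(B:B)) gives the number of elapsed days (15), and DAY(EOMONTH(...)) gives the number of days in the month (31).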
I am working with a CSV file and I need 100+ numbers, which are all different, to each differ from the original by exactly 10. I would like something like a cell that looks like this: 76 - ("unknown" number) = 10.
The way the CSV file is set up, the first column is the item price (e.g. 79.99), and it needs to equal 69.99 in the sale column, which I am trying to edit, so that it displays a $10 discount on our site. We have different prices for each product.
try:
=ARRAYFORMULA(IF(A2:A="",,A2:A-10))
I have data generated on a daily basis.
Let me explain through an example:
On the world market, the price of gold changes at second intervals, and I want to store those prices in a Redis DBMS.
Gold    22 JAN 11.02PM  X-price
        22 JAN 11.03PM  Y-price
        ...
        24 DEC 11.04PM  X1-price
Silver  22 JAN 11.02PM  M-price
        22 JAN 11.03PM  N-price
I want to store this data on a daily basis and apply ML (machine learning) to the last 52 weeks of data. Is this possible?
I ask because, as far as my knowledge goes, Redis works on key-value pairs.
If it is possible, can I get data for a specific date (04 July) and for a date range (01 Feb to 31 Mar)?
In Redis, a Sorted Set is appropriate for time series data. If you score each entry with the timestamp of the price quote, you can quickly access a whole day or group of days using the ZRANGEBYSCORE command (or ZSCAN if you need paging).
The quote data can be stored right in the sorted set. If you do this, make sure each entry is unique: adding a record to a sorted set that is identical to an existing one just updates the existing record's score (timestamp). This moves the old record to the present and erases it from the past, which is not what you want.
I would recommend only storing a unique key/ID for each quote in the sorted set, and storing the data in its own key or hash field. This will allow you to create additional indexes for the data as needed and access specific records more easily if necessary.
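A rough sketch with raw Redis commands (key names and the price are illustrative; scores are Unix timestamps):

SET quote:gold:1625404920 "1841.50"
ZADD quotes:gold 1625404920 quote:gold:1625404920
ZRANGEBYSCORE quotes:gold 1625356800 1625443199

The first two commands store one quote under its own key and index that key by timestamp; the ZRANGEBYSCORE then returns the keys of every quote for 04 July 2021 (UTC), which you can fetch with GET/MGET. A date range like 01 Feb to 31 Mar is just a wider min/max pair.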
I am working on a Cassandra data model for storing time series (I'm a Cassandra newbie).
I have two applications: intraday stock data and sensor data.
The stock data will be saved with a time resolution of one minute.
Seven data fields make up one timeframe:
Symbol, Datetime, Open, High, Low, Close, Volume
I will query the data mostly by Symbol and Date. e.g. give me all data for AAPL between 2013-01-01 and 2013-01-31 ordered by Datetime.
The recommendation for Cassandra queries is to query whole columns. So you could create five rows with the keys Open, High, Low, Close, Volume, and a column of its own for each symbol and minute, e.g. "AAPL:2013-01-04T130400Z".
This would result in a table of five rows and n*nT columns, where n = number of symbols and nT = number of minutes.
Most of the time I will query date ranges. I.e. all minutes of a day. So I could rearrange the data to have columns named "AAPL:2013-01-04" and rows: OpenT130400Z, HighT130400Z, LowT130400Z, CloseT130400Z, VolumeT130400Z.
This would result in a table with n*nD columns (n: number of Symbols, nD: number of Days) and 5*nM rows (nM: number of minutes/entries per day).
To sum up: I have columns, which hold the information for a whole day for one symbol.
I have found a description of how to deal with time series data in Cassandra here: http://www.datastax.com/dev/blog/advanced-time-series-with-cassandra
But I don't really get whether they use the hour (1332960000) as a column name or as a row key.
I understood that they use the hour as the row key and have the small timesteps as columns, so they would have a fixed number of columns. But that would have disadvantages when reading, because I would have to do a range query on keys. Am I right?
Second question:
If I have sensor data which is much more fine-grained than 1-minute stock data (let's say I have to save timesteps with a resolution of microseconds), how would I deal with this?
If I use columns for saving a composite of sensor channel and hour, and rows for the microseconds since the last hour, this would result in 3,600,000,000 rows and n*nH columns (n: number of sensors, nH: number of hours).
I could not use the microseconds since the last hour for columns, because I would have 3.6 billion points, which is higher than the allowed number of 2 billion columns.
Did I get that right?
What do you think about this problem? How to solve it?
Thank you!
Best,
Malte
So I have a suggestion for your first question about the stock data. A naive implementation might look like this:
Row key: the stock symbol
Column format:
Name: the current datetime, granular to a minute
Value: a composite column of Open, High, Low, Close, Volume
So you would have something like
AAPL = [2013-05-02-15:38:00 | 441.78:448.59:440.63:15066146:445.52] ... [2013-05-02-15:39:00 | 441.78:448.59:440.63:15066146:445.52] ... [2013-05-02-15:40:00 | 441.78:448.59:440.63:15066146:445.52]
That would give you roughly half a million columns in one year, so it might be OK for maybe 4 years; I wouldn't go and attempt to hit the 2 billion limit. What you could do is define a splitting factor on the row key. It all depends on your usage pattern, but a simple one might be the year, so the column family entry might look like this, with a composite row key, which would guarantee that you always have less than a million columns per row:
AAPL:2013 = [05-02-15:38:00 | 441.78:448.59:440.63:15066146:445.52] ... [05-02-15:39:00 | 441.78:448.59:440.63:15066146:445.52] ... [05-02-15:40:00 | 441.78:448.59:440.63:15066146:445.52]
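Expressed in CQL (a sketch of the same idea; the table and column names are illustrative), the symbol-plus-year split becomes a composite partition key and the minute timestamp a clustering column:

CREATE TABLE stock_quotes (
    symbol text,
    year   int,
    ts     timestamp,
    open   decimal,
    high   decimal,
    low    decimal,
    close  decimal,
    volume bigint,
    PRIMARY KEY ((symbol, year), ts)
);

-- all AAPL minutes for January 2013, ordered by time
SELECT * FROM stock_quotes
WHERE symbol = 'AAPL' AND year = 2013
AND ts >= '2013-01-01' AND ts < '2013-02-01';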