How to store data for a multi-time-frame TradingView chart? - database

To display data on a TradingView chart across multiple time frames, how should I store the data? Should I store minute data and build the other time frames from it? If so, how? Or should I store data separately for each time frame? I have already tested this with 1-day data, and the TradingView chart easily produces higher time frames from it.
It's obvious that higher time frames can be built from lower-time-frame data, but one 4-hour candle needs 240 one-minute candles. That is a huge amount of data, and retrieving 240 rows per candle will make the chart slow. Is there a better way?
What if I filter the data on the server and then send the result to the user? But that also puts too much load on the server.
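As a sketch of that server-side approach (plain JavaScript; the candle shape {time, open, high, low, close, volume} and all names here are illustrative assumptions, not a fixed schema): store only 1-minute candles and roll them up into the requested time frame before sending, so the client receives one candle per bucket instead of 240 raw rows.

    // Roll 1-minute candles up into a higher time frame on the server.
    // Assumes `candles` is sorted by `time` (ms since epoch) ascending.
    function resample(candles, bucketMs) {
      const out = new Map();
      for (const c of candles) {
        const bucket = Math.floor(c.time / bucketMs) * bucketMs; // bucket start
        const agg = out.get(bucket);
        if (!agg) {
          out.set(bucket, { time: bucket, open: c.open, high: c.high,
                            low: c.low, close: c.close, volume: c.volume });
        } else {
          agg.high = Math.max(agg.high, c.high);
          agg.low = Math.min(agg.low, c.low);
          agg.close = c.close;   // the last close in the bucket wins
          agg.volume += c.volume;
        }
      }
      return [...out.values()];
    }

    // e.g. 4-hour candles from 1-minute candles:
    const fourHour = resample(oneMinuteCandles, 4 * 60 * 60 * 1000);

In practice you would compute popular time frames once (or incrementally as new minutes arrive) and cache or persist the result per time frame, rather than re-aggregating 240 rows on every chart request.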

Related

Continuously changing Big Data and C#

I have a dataset with {A LOT OF} rows containing continuously corrected data for 50,000 assets belonging to 150 customers, spanning well over a decade and a half.
I want to achieve the following:
Storing data and changing data should be done at a reasonable speed
Getting data must be super fast
The customer should be able to retrieve data over a daily/monthly/quarterly/annual time range for up to 10,000 assets at a time
All I need is the best way to implement a data storage solution for this dataset.
I have tried SQL Server - it's slow. With Azure Table storage, getting data for 10K assets results in 10K queries, so that's no good.
Google's big-data offerings have the same problem.
Any help appreciated
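On the "10K assets results in 10K queries" point: whatever store is chosen, the read path should fetch all requested assets in one range query rather than one query per asset, which calls for a compound key of (asset, date). A hedged sketch in MongoDB, purely for illustration (collection and field names are assumptions):

    // One query for up to 10,000 assets over a date range, not one each.
    // The compound index keeps the range scan cheap.
    db.valuations.createIndex({ assetId: 1, date: 1 })

    db.valuations.find({
      assetId: { $in: assetIds },  // the customer's asset ids, batched
      date: { $gte: ISODate("2005-01-01"), $lt: ISODate("2006-01-01") }
    })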

How to reduce size of data sent from database?

I want to reduce the size of the data I am receiving from MongoDB Atlas, and hence the waiting time.
I am using Node-RED as my back end, retrieving power-consumption data from the database to show in charts on the front end. I have tried querying the database for only specific fields, like current and timestamp, hoping to reduce the size of the received data.
Collection consists of such documents:
    {
      _id: 5d398ed7b4926101926e43bb,
      voltage: 233.1302947998047,
      current: 0.03415190055966377,
      fr: 50.01874923706055,
      timestamp: 2019-07-25T11:13:27.582+00:00
    }
Query I have tried:
find({},{voltage:1, timestamp:1, _id:0})
I expect to make a query that reduces the data sent from the DB to a half or a third by sending only every second or third document. Currently I am receiving all the data for a given field, and it takes around 2 minutes to receive everything.
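Projection alone only drops fields; to actually shrink the number of documents sent, one option is to downsample on the server with an aggregation pipeline. A hedged sketch (mongo shell; the collection name `power` is an assumption): group readings into one-minute buckets and return one averaged document per bucket, which shrinks the payload by the bucket factor rather than just a half or a third.

    // Downsample server-side: one averaged document per minute.
    db.power.aggregate([
      { $sort: { timestamp: 1 } },
      { $group: {
          _id: { $dateToString: { format: "%Y-%m-%dT%H:%M", date: "$timestamp" } },
          voltage:   { $avg: "$voltage" },
          current:   { $avg: "$current" },
          timestamp: { $first: "$timestamp" }  // first reading in the bucket
      } },
      { $sort: { timestamp: 1 } }
    ])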

How to use the ChronicleMap as a TimeSeries Database?

I have a backtesting framework that needs to replay tick level market data in order. I am currently using Cassandra where my schema is structured to have all ticks for a single trade date in 1 row. Each column represents a single tick. This makes the backtesting framework simple because it can play date ranges by pulling one date at a time in sequence.
I would like to use ChronicleMap and compare its performance with Cassandra.
How do you model ChronicleMap to support the schema of 1 row per tick data?
ChronicleMap is designed to be a random access key-value store.
For back testing, most people use Chronicle Queue to store ordered events. You can use it to store any type of data in order. To look up by time you can search on a monotonically increasing field with a binary search or a range search.
Note: Chronicle Queue is designed to record data in your application in real time, i.e. with less than a microsecond of overhead. You can replay this data either as it happens or later as historical data. It is designed to support GC-free writing and reading.
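As a sketch of the time lookup described above, in plain JavaScript (Chronicle Queue's actual API is Java and is not shown here): because the timestamp field is monotonically increasing, a binary search finds where replay of a date range should start.

    // First index whose timestamp is >= target, given `ticks` ordered by
    // a monotonically increasing `timestamp` (ms since epoch).
    function lowerBound(ticks, target) {
      let lo = 0, hi = ticks.length;
      while (lo < hi) {
        const mid = (lo + hi) >>> 1;
        if (ticks[mid].timestamp < target) lo = mid + 1;
        else hi = mid;
      }
      return lo;
    }

    // Replay a date range in order (handle() is a hypothetical consumer):
    for (let i = lowerBound(ticks, rangeStart);
         i < ticks.length && ticks[i].timestamp <= rangeEnd; i++) {
      handle(ticks[i]);
    }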

How to save frequent received data in database?

Ten students and I are doing a big project where we need to receive temperature data from hardware in the form of nodes; the data should be uploaded and stored on a server. As we are all embedded-systems engineers with only minor database knowledge, I am turning to you guys.
I want to receive data from the nodes, let's say every 30 seconds. The table storing that data would quickly become very long if you store [nodeId, time, temp] rows. Do you have any suggestions for storing the data in another way?
A solution could be to store it as described for a period of time and then "compress" it somehow into a matrix of some sort? I still want to be able to reach old data.
One row every 30 seconds is not a lot of data. It's 2880 rows per day per node. I once designed a database which had 32 million rows added per day, every day. I haven't looked at it for a while but I know it's currently got more than 21 billion rows in it.
The only thing to bear in mind is that you need to think about how you're going to query it, and make sure it has appropriate indexes.
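For instance (sketched with MongoDB purely for illustration, since the question doesn't name a database), a compound index matching the typical "one node over a time range" query keeps reads fast even as the table grows into the millions of rows:

    // Equality on nodeId first, then range on time.
    db.readings.createIndex({ nodeId: 1, time: 1 })

    // The usual chart query then walks one contiguous slice of the index:
    db.readings.find({
      nodeId: 42,
      time: { $gte: ISODate("2019-07-01"), $lt: ISODate("2019-08-01") }
    })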
Have fun!

How to store and retrieve large numbers of data points for graphical visualization?

I'm thinking about building a web-based data logging and visualization service. The basic idea is that at some timed interval something (e.g. a sensor) reports a value (e.g. temperature) to the server. The server records this value into a database. There would be a web-based UI that allows me to view this data on a time-based graph. Ideally this graph would have various resolutions (last 30 seconds, last week, last year, etc). In a super ideal world, I would be able to zoom into the data for any point in time.
The problem is that the sensors are going to generate enormous amounts of data. For example, a sensor that reports a value every 5 seconds generates 17,280 values a day (86,400 seconds / 5). I'm imagining a system that has thousands of sensors. Over time, this becomes a lot of data.
The naive solution is to throw this data into a relational database and retrieve it in the various ways I want, but that won't scale.
The simple solution is to reduce the amount of data by performing periodic roll-ups of the data. New data might go into a table that has data points every 5 seconds. Every hour, some system pumps this data into another table that has data points every minute and the original data is deleted. This repeats for a few levels. The downside to this is that the further back in time you go, the less detailed the data is. That's probably fine. I would imagine that I would need enormous amounts of hardware to support full resolution of data over all time as compared to a system with this sort of rollup.
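A hedged sketch of one rollup pass (mongo shell; the collection names raw_5s and rollup_1m and the field names are assumptions): average 5-second points older than an hour into 1-minute points, write them into the coarser collection, then delete the raw points.

    // Roll 5-second points older than one hour up into 1-minute averages.
    const cutoff = new Date(Date.now() - 60 * 60 * 1000);
    db.raw_5s.aggregate([
      { $match: { ts: { $lt: cutoff } } },
      { $group: {
          _id: { sensor: "$sensor",
                 minute: { $dateToString: { format: "%Y-%m-%dT%H:%M", date: "$ts" } } },
          value: { $avg: "$value" }
      } },
      { $merge: { into: "rollup_1m" } }  // $merge requires MongoDB 4.2+
    ])
    db.raw_5s.deleteMany({ ts: { $lt: cutoff } })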
Is there a better way to do this? Is there an existing solution? I have to imagine this is a fairly common problem.
You probably want a fixed-size database like RRDtool: http://oss.oetiker.ch/rrdtool/
Graphite is also built on top of a similar datastore implementation: http://graphite.wikidot.com/
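To make the "fixed-size" idea concrete, here is a toy round-robin archive in plain JavaScript (an illustration of the concept only, not RRDtool's actual on-disk format): a fixed-capacity ring buffer where each new point overwrites the oldest, so storage never grows no matter how long the logger runs.

    // Toy round-robin archive: fixed capacity, oldest point overwritten first.
    class RoundRobinArchive {
      constructor(capacity) {
        this.buf = new Array(capacity);
        this.next = 0;   // slot the next write goes into
        this.count = 0;  // how many slots are filled so far
      }
      push(point) {
        this.buf[this.next] = point;
        this.next = (this.next + 1) % this.buf.length;
        this.count = Math.min(this.count + 1, this.buf.length);
      }
      toArray() {  // oldest-to-newest, for charting
        if (this.count < this.buf.length) return this.buf.slice(0, this.count);
        return [...this.buf.slice(this.next), ...this.buf.slice(0, this.next)];
      }
    }

    // e.g. exactly one day of 5-second samples: 86,400 / 5 = 17,280 slots.
    const lastDay = new RoundRobinArchive(17280);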
